Planning and Design of a 3G Radio Network

M. Hemanth (10261A0438)
T. Mounish Kumar (10261A0454)
T. Rishitha Reddy (10261A0455)

2014
CERTIFICATE

Date:

This is to certify that the project work entitled "Planning and Design of a 3G Radio Network" is a bona fide work carried out by

M. Hemanth (10261A0438)
T. Mounish Kumar (10261A0454)
T. Rishitha Reddy (10261A0455)

in Electronics & Communication Engineering.

(Signature)                                      (Signature)
Mr. K. Bala Prasad, Asst. Professor              Dr. S. P. Singh
Advisor/Liaison                                  Professor & Head
ACKNOWLEDGEMENT
We express our deep sense of gratitude to our Faculty Liaison, Mr. P. Naresh, Sr. Engineer, RTTC, BSNL, Hyderabad, for his invaluable guidance and encouragement in carrying out our Project.
We are highly indebted to our Faculty Liaison, Mr. K. Bala Prasad, Assistant Professor, Electronics and Communication Engineering Department, who has given us all the necessary technical guidance in carrying out this Project.
We wish to express our sincere thanks to Dr. S. P. Singh, Head of the Department of Electronics and Communication Engineering, M.G.I.T., for permitting us to pursue our Project at BSNL and encouraging us throughout the Project.
Finally, we thank all the people who have directly or indirectly helped us
throughout the course of our Project.
M. Hemanth
T. Mounish Kumar
T. Rishitha Reddy
ABSTRACT
The emergence of Third Generation mobile technology (commonly known as 3G) has been the latest innovation in the field of communication. The first generation comprised analog systems.
Table of Contents

CERTIFICATE FROM ECE DEPARTMENT
ACKNOWLEDGEMENTS
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1. OVERVIEW
1.1 Introduction
1.3 Methodology
2.1.3 The Third-generation (WCDMA in UMTS, CDMA2000 & TD-SCDMA)
2.2 Spread spectrum techniques
2.2.1 DS-CDMA
3.2.1 Dimensioning
3.6.3 HO Problems
REFERENCES
LIST OF FIGURES
1.1 Block Diagram
2.1.1 Graph
2.1.2 Next Generation Mobile Communication
2.2.1 DS-CDMA
2.2.2 FH-CDMA
2.2.3 TH-CDMA
2.2.4 MC-CDMA
2.3 Sequential Steps
3.1 Optimization in basic steps
3.2 Simplified Network
3.3 Workflow in Atoll
4.1 Result 1
4.2 Result 2
LIST OF TABLES
3.7 Standard Deviation
CHAPTER 1. OVERVIEW
1.1 Introduction
All cellular phone networks worldwide use a portion of the radio frequency spectrum designated as ultra high frequency (UHF) for the transmission and reception of their signals. The radio frequencies used by 3G are 1920-2170 MHz, referred to as the UMTS (Universal Mobile Telecommunications System) frequency bands. UMTS specifies a complete network system, which includes the radio access network (UTRAN), the core network (CN) and the authentication of users via SIM (Subscriber Identity Module) cards.
In India, the Department of Telecommunications (DoT) conducts auctions of licenses for electromagnetic spectrum. In 2010, 3G and BWA (4G) spectrum were auctioned in a highly competitive bidding process, and Tata DoCoMo subsequently became the first private operator to launch 3G services in India. Once operators obtain spectrum through the auction process, they must build entirely new networks and license entirely new frequencies, especially to achieve high data transmission rates. 3G UMTS networks are very popular around the world. 3G cellular systems are very flexible, but more complex and costly compared to older systems, which makes the design and planning of such networks very challenging. In this context, the competitive market of cellular networks mandates operators to capitalize on efficient design tools. Planning tools are used to optimize networks and keep both operators and users satisfied.
Hence, in this project, the evolution of 3G and the planning and design of a 3G radio network are studied. The aim is an optimum topology for the network that satisfies both the network provider, who aspires to a high number of users, capacity and quality with low capital expenditure, and the users, who expect high-quality services at affordable prices. This can be achieved by using proper planning tools. Atoll, one of the popular planning tools used for UMTS network design, is studied in this project.
1.3 Methodology
The radio network planning process can be divided into different phases. At the
beginning is the Preplanning phase. In this phase, the basic general properties of the
future network are investigated, for example, what kind of mobile services will be
offered by the network, what kind of requirements the different services impose on
the network, the basic network configuration parameters and so on.
The second phase is the main planning phase. A site survey is carried out for the area to be covered, and the possible sites for setting up the base stations are investigated. All the data
related to the geographical properties and the estimated traffic volumes at different
points of the area will be incorporated into a digital map, which consists of different
pixels, each of which records all the information about this point. Based on the
propagation model, the link budget is calculated, which will help to define the cell
range and coverage threshold. There are some important parameters which greatly
influence the link budget, for example, the sensitivity and antenna gain of the mobile
equipment and the base station, the cable loss, the fade margin etc. Based on the
digital map and the link budget, computer simulations will evaluate the different
possibilities to build up the radio network part by using some optimization algorithms.
The goal is to achieve as much coverage as possible with the optimal capacity, while
reducing the costs also as much as possible. The coverage and the capacity planning
are of essential importance in the whole radio network planning. The coverage
planning determines the service range, and the capacity planning determines the
number of to-be-used base stations and their respective capacities.
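As an illustration of how the link budget output feeds the cell range and site count, the short Python sketch below applies the COST-231 Hata urban model to an assumed maximum allowed path loss. All numeric values (path loss, frequency, antenna heights, area and the 1.95*r^2 three-sector site-area factor) are illustrative assumptions, not figures from this project.

    import math

    # Assumed inputs: allowed path loss from the link budget, COST-231 Hata
    # urban propagation at 2100 MHz, and the size of the area to be covered.
    max_path_loss_db = 142.0
    f_mhz, h_bs_m, h_ms_m = 2100.0, 30.0, 1.5
    area_km2 = 50.0

    # COST-231 Hata urban path loss: L = A + B * log10(d_km)
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_ms_m - (1.56 * math.log10(f_mhz) - 0.8)
    A = 46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_bs_m) - a_hm + 3.0
    B = 44.9 - 6.55 * math.log10(h_bs_m)

    cell_range_km = 10 ** ((max_path_loss_db - A) / B)

    # Approximate hexagonal site area for a 3-sector site: 1.95 * r^2
    site_area_km2 = 1.95 * cell_range_km ** 2
    sites_needed = math.ceil(area_km2 / site_area_km2)

    print(f"Cell range: {cell_range_km:.2f} km")
    print(f"Site area:  {site_area_km2:.2f} km2, sites needed: {sites_needed}")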
In the third phase, constant adjustment will be made to improve the network planning.
Through driving tests the simulated results will be examined and refined until the best
compromise between all of the facts is achieved. Then the final radio plan is ready to
be deployed in the area to be covered and served. The whole process is illustrated as
the figure below:
Figure 1.1: Block diagram of the radio network planning process (pre-planning phase, site survey, network planning).
The history of mobile telephony dates back to the 1920s with the use of radiotelephony by police departments in the United States. The initial equipment was bulky and the phones did not deal well with obstacles and buildings. The introduction of Frequency Modulation (FM) in the 1930s brought some progress and helped radio communications on the battlefield during World War II. The first mobile telephone service was introduced in the 1940s with limited capacity and mobility. Mobile communications development continued for years to become the commercial service we have today. The terminology of generations is used to mark the significant technology improvements in cellular networks which, in turn, resulted in major changes in the wireless industry. The first generation (1G) of cellular networks was introduced in the late 1970s, which was followed by the second generation (2G) in the early 1990s, the third generation (3G) in the early 2000s and the fourth generation (4G) nowadays. Changes from analog to digital
technology, implementing new multiplexing and access techniques, employing new
codes and frequencies, introducing IP as a substitution for legacy transmission
methods and many other innovations resulted in networks with more services, higher
capacity, speed and security. In the following sub-sections, we explain different
generations of cellular networks and discuss their specifications.
Figure 2.1.1: Graph of the evolution of cellular generations from the narrowband era to the broadband multimedia era: 1G (voice, about 2.4 kbps, 1980s), 2G (about 64 kbps, 1990s), 3G (about 2 Mbps, 2000s) and 4G (up to about 1 Gbps, 2010s).
2.1.1 The First-generation Systems (Analog)
First-generation mobile systems used analog transmission for speech services. In 1979, the
first cellular system in the world became operational by Nippon Telephone and
Telegraph (NTT) in Tokyo, Japan. Two years later, the cellular epoch reached Europe.
The two most popular analog systems were Nordic Mobile Telephones (NMT) and
Total Access Communication Systems (TACS). Other than NMT and TACS, some
other analog systems were also introduced in the 1980s across Europe. All of these
systems offered handover and roaming capabilities but the cellular networks were
unable to interoperate between countries. This was one of the inevitable
disadvantages of first-generation mobile networks.
In the United States, the Advanced Mobile Phone System (AMPS) was launched in 1982. The system was allocated a 40-MHz bandwidth within the 800 to 900 MHz frequency range by the Federal Communications Commission (FCC). In 1988, an additional 10-MHz bandwidth, called Expanded Spectrum (ES), was allocated to AMPS. It was first deployed in Chicago, with a service area of 2100 square miles. AMPS offered 832 channels, with a data rate of 10 kbps. Although omnidirectional antennas were used in the earlier AMPS implementation, it was realized that using directional antennas would yield better cell reuse. In fact, the smallest reuse factor that would fulfill the 18 dB signal-to-interference ratio (SIR) requirement using 120-degree directional antennas was found to be 7. Hence, a 7-cell reuse pattern was adopted for AMPS. Transmissions from the base stations to the mobiles occur over the forward channel using frequencies between 869 and 894 MHz. The reverse channel is used for transmissions from the mobiles to the base station, using frequencies between 824 and 849 MHz. AMPS and TACS use the frequency modulation (FM) technique for radio transmission. Traffic is multiplexed onto an FDMA (frequency division multiple access) system.
2.1.2 The Second-generation & Phase 2+ Systems (Digital)
Second-generation (2G) mobile systems were introduced in the end of 1980s. Low bit
rate data services were supported as well as the traditional speech service. Compared
to first-generation systems, second-generation (2G) systems use digital multiple
access technology, such as TDMA (time division multiple access) and CDMA (code
division multiple access). Consequently, compared with first-generation systems,
higher spectrum efficiency, better data services, and more advanced roaming were
offered by 2G systems. In Europe, the Global System for Mobile Communications
(GSM) was deployed to provide a single unified standard. This enabled seamless
services throughout Europe by means of international roaming. Global System for
Mobile Communications, or GSM, uses TDMA technology to support multiple users
During development over more than 20 years, GSM technology has been
continuously improved to offer better services in the market. New technologies have
been developed based on the original GSM system, leading to some more advanced
systems known as 2.5 Generation (2.5G) systems. In the United States, there were
three lines of development in second-generation digital cellular systems. The first
digital system, introduced in 1991, was the IS-54 (North America TDMA Digital
Cellular), of which a new version supporting additional services (IS-136) was
introduced in 1996. Meanwhile, IS-95 (CDMA One) was deployed in 1993. The US
Federal Communications Commission (FCC) also auctioned a new block of spectrum
in the 1900 MHz band (PCS), allowing GSM1900 to enter the US market. In Japan,
the Personal Digital Cellular (PDC) system, originally known as JDC (Japanese
Digital Cellular), was initially defined in 1990. Since the first networks appeared at the beginning of 1991, GSM gradually evolved to meet the requirements of data traffic and many more services than the original networks. GSM (Global System for Mobile Communication): The main elements of this system are the BSS (Base Station
Subsystem), in which there are BTS (Base Transceiver Station) and BSC (Base
Station Controllers); and the NSS (Network Switching Subsystem), in which there is
the MSC (Mobile Switching Centre); VLR (Visitor Location Register); HLR (Home
Location Register); AC (Authentication Centre) and EIR (Equipment Identity
Register). This network is capable of providing all the basic services up to 9.6kbps,
fax, etc. This GSM network also has an extension to the fixed telephony network. A
new design was introduced into the mobile switching center of second-generation
systems. In particular, the use of base station controllers (BSCs) lightens the load
placed on the MSC (mobile switching center) found in first generation systems. This
design allows the interface between the MSC and BSC to be standardized. Hence,
considerable attention was devoted to interoperability and standardization in second-generation systems so that carriers could employ different manufacturers for the MSC
and BSCs. In addition to enhancements in MSC design, the mobile-assisted handoff
mechanism was introduced. By sensing signals received from adjacent base stations, a
mobile unit can trigger a handoff by performing explicit signaling with the network.
GSM and VAS (Value Added Services): The next advancement in the GSM system
was the addition of two platforms, called Voice Mail Service (VMS) and the Short
Message Service Centre (SMSC). The SMSC proved to be incredibly commercially
successful, so much so that in some networks the SMS traffic constitutes a major part
of the total traffic. Along with VAS, IN (Intelligent services) also made its mark in the
GSM system, with its advantage of giving the operators the chance to create a whole
range of new services. Fraud management and prepaid services are the result of the
IN service.
GSM and GPRS (General Packet Radio Service): As the requirement for sending data on the air-interface increased, new elements such as the SGSN (Serving GPRS Support Node) and GGSN (Gateway GPRS Support Node) were added to the existing GSM system. These elements
made it possible to send packet data on the air-interface. This part of the network
handling the packet data is also called the packet core network. In addition to the
SGSN and GGSN, it also contains the IP routers, firewall servers and DNS (Domain
Name Servers). This enables wireless access to the Internet with bit rates reaching
150 kbps in optimum conditions. The move into the 2.5G world began with General
Packet Radio Service (GPRS). GPRS is a radio technology for GSM networks that
adds packet-switching protocols, shorter setup time for ISP connections, and the
possibility to charge by the amount of data sent, rather than connection time. Packet
switching is a technique whereby the information (voice or data) to be sent is broken
up into packets, of at most a few Kbytes each, which are then routed by the network
between different destinations based on addressing data within each packet. Use of
network resources is optimized as the resources are needed only during the handling
of each packet. GPRS supports flexible data transmission rates as well as continuous
connection to the network. GPRS is the most significant step towards 3G.
2.1.3 The Third-generation (WCDMA in UMTS, CDMA2000 & TD-SCDMA)
In EDGE, high-volume movement of data was possible, but the packet transfer on the air-interface still behaved like a circuit-switched call, so part of the packet connection efficiency was lost in the circuit-switched environment. Moreover, the standards for developing the networks were different for different parts of the world. Hence, it was decided to have a network which provides services independent of the technology platform and whose network design standards are the same globally. Thus, 3G was born.
The International Telecommunication Union (ITU) defined the requirements for 3G mobile networks with the IMT-2000 standard. An organization called the 3rd Generation Partnership Project (3GPP) has continued that work by defining a mobile system that fulfills the IMT-2000 standard. In Europe it was called UMTS (Universal Mobile Telecommunications System), which is ETSI-driven. IMT-2000 is the ITU name for the third generation system, while cdma2000 is the name of the American 3G variant.
WCDMA is the air-interface technology for UMTS. The main components include the BS (Base Station), also called Node B, and the RNC (Radio Network Controller), apart from the WMSC (Wideband CDMA Mobile Switching Centre) and the SGSN/GGSN. 3G
networks enable network operators to offer users a wider range of more advanced
services while achieving greater network capacity through improved spectral
efficiency. Services include wide-area wireless voice telephony, video calls, and
broadband wireless data, all in a mobile environment. Additional features also include
HSPA (High Speed Packet Access) data transmission capabilities able to deliver
speeds of up to 14.4 Mbps on the downlink and 5.8 Mbps on the uplink. The first commercial 3G network was launched by NTT DoCoMo in Japan, branded FOMA and based on W-CDMA technology, on October 1, 2001. The second network to go commercially live was by SK Telecom in South Korea on 1xEV-DO (Evolution Data Optimized) technology in January 2002, followed by another South Korean 3G network by KTF on EV-DO in May 2002. In Europe, the mass market
commercial 3G services were introduced starting in March 2003 by 3 (Part of
Hutchison Whampoa) in the UK and Italy. This was based on the W-CDMA
technology. The first commercial United States 3G network was by Monet Mobile
Networks, on CDMA2000 1x EV-DO technology and the second 3G network
operator in the USA was Verizon Wireless in October 2003 also on CDMA2000 1x
The main difference between the GSM/3G architecture and the All-IP architecture is that the functionality of the RNC and BSC is now distributed to the BTS and a set of servers and gateways. This means that the network will be less expensive and data transfer will be much faster. 4G aims to ensure that the user has the freedom and flexibility to select any desired service with reasonable QoS at an affordable price, anytime, anywhere. 4G mobile communication services started in 2010 but were expected to become a mass market around 2014-15.
Figure 2.1.2: Next generation mobile communication - key characteristics of 4G: seamless access, personalization, quality of service and IP-based networking.
2.2 Spread Spectrum Techniques
2.2.1 DS-CDMA
In DS-CDMA, the original signal is multiplied directly by a faster-rate spreading code (Figure 2.2.1). The resulting signal then modulates the digital wideband carrier. The chip rate of the code signal must be much higher than the bit rate of the information signal. The receiver despreads the signal using the same code. It has to be able to synchronize the received signal with the locally generated code; otherwise, the original signal cannot be recovered.
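The spreading and despreading operations described above can be demonstrated with a toy Python sketch; the random ±1 code below is used only for illustration and is not an actual UMTS channelisation or scrambling code.

    import numpy as np

    rng = np.random.default_rng(1)

    SF = 8                                        # spreading factor (chips per bit)
    data_bits = rng.choice([-1, 1], size=16)      # narrowband information signal
    code = rng.choice([-1, 1], size=SF)           # faster-rate spreading code

    # Spreading: each data bit is multiplied chip-by-chip by the code
    chips = np.repeat(data_bits, SF) * np.tile(code, data_bits.size)

    # Despreading at the receiver with the same, synchronised code:
    # multiply by the code again and integrate over each bit period
    received = chips                                   # ideal, noise-free channel
    despread = received * np.tile(code, data_bits.size)
    recovered = np.sign(despread.reshape(-1, SF).sum(axis=1))

    print("Recovered bits match:", np.array_equal(recovered, data_bits))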
2.2.2 Frequency-Hopping CDMA
In frequency-hopping CDMA (FH-CDMA), the carrier frequency at which the signal is transmitted is changed rapidly according to the spreading code. Frequency-hopping (FH) systems use only a small part of the bandwidth at a time, but the location of this part changes according to the spreading code (Figure 2.2.2). The receiver uses the same code to convert the received signal back to the original. FH-CDMA systems can be further divided into slow- and fast-hopping systems. In a slow-hopping system, several symbols are transmitted on the same frequency, whereas in fast-hopping systems, the frequency changes several times during the transmission of one symbol. The GSM system is an example of a slow FH system because the transmitter's carrier frequency changes only with the time slot rate (217 hops per second), which is much slower than the symbol rate. Fast FH systems are very expensive with current technologies and are not at all common.
Each sub-problem has been widely explored from different perspectives. In the
following sub-sections, each sub-problem is explained and the major works in solving
them are presented.
Figure 2.3: Sequential planning steps - the cell planning, access network planning and core network planning sub-problems are solved one after another, the output of each sub-problem serving as input to the next, leading to the final solution.
The typical objectives of the cell planning sub-problem include:
1. Minimize network cost;
2. Maximize capacity;
3. Maximize coverage;
4. Maximize signal quality;
5. Minimize electromagnetic field level.
Some of the above objectives conflict with each other. For example, maximizing the coverage and capacity requires deploying more Node Bs, which in turn increases the network cost. Another contradiction arises when the signal power is increased to maximize signal quality, which results in a higher electromagnetic field level. If more than one criterion is considered during cell planning, then a multi-objective function is defined. A multi-objective function can be produced as a linear or weighted combination of the single objectives.
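As a simple illustration of such a weighted combination, the sketch below scores a few hypothetical candidate plans; the weights and plan metrics are invented placeholders, not values from this project.

    # Hypothetical example: score candidate cell plans with a weighted objective.
    # Cost and EMF level are to be minimised, coverage and quality maximised,
    # so the minimised quantities enter with a negative sign after normalisation.

    candidate_plans = {
        # name: (cost in currency units, coverage fraction, signal quality, EMF index)
        "plan_A": (1_000_000, 0.92, 0.80, 0.40),
        "plan_B": (1_300_000, 0.97, 0.85, 0.55),
        "plan_C": (900_000, 0.88, 0.75, 0.35),
    }

    weights = {"cost": 0.4, "coverage": 0.3, "quality": 0.2, "emf": 0.1}
    max_cost = max(p[0] for p in candidate_plans.values())

    def objective(plan):
        cost, coverage, quality, emf = plan
        return (-weights["cost"] * cost / max_cost      # cheaper is better
                + weights["coverage"] * coverage        # more coverage is better
                + weights["quality"] * quality          # better signal quality
                - weights["emf"] * emf)                 # lower EMF level is better

    best = max(candidate_plans, key=lambda name: objective(candidate_plans[name]))
    for name, plan in candidate_plans.items():
        print(f"{name}: objective = {objective(plan):+.3f}")
    print("Selected plan:", best)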
Cell Planning Inputs and Outputs
As stated earlier, inputs are required to solve the cell planning sub-problem. Usually,
the following inputs must be known :
1. The potential locations where Node Bs can be installed. Some geographical
constraints are applied to restrict the location selection;
2. The types (or models) of Node Bs, including, but not restricted to, their cost and capacity (e.g. power, sensitivity, switch fabric capacity, interfaces, etc.);
3. The user distributions and their required amount of traffic (e.g. voice and data);
4. The coverage and propagation prediction.
Various planning algorithms are used to solve the cell planning sub-problem. Each
algorithm may consider one or more of the objectives mentioned previously. The goal
of the cell planning sub-problem is to provide one or more of the following as output:
1. The optimal number of Node Bs;
2. The best locations to install Node Bs;
3. The types of Node Bs;
4. The configuration (height, sector orientation, tilt, power, etc.) of Node Bs;
5. The assignment of mobile users to Node Bs.
For the modeling of the cell planning sub-problem, it is required to know how to
represent users (or traffic) in the model. In the following sub-section traffic modeling
and related issues are discussed.
In WCDMA, power control adjusts the transmission power levels to minimize interference and guarantee adequate quality at the receiver. The SIR in UMTS networks is highly affected by the traffic distribution in the whole area and, unlike in 2G networks, the SIR should be equal to a given threshold.
In summary, the cell capacity and coverage depend on the number of users and their distribution, as well as on the Power Control (PC) mechanisms. The PC mechanisms are based on either the received power or the estimated SIR.
b.The Access Network Planning Sub-Problem
The main elements of the access network are the Node Bs and the RNCs. In order to
plan a good access network, the following inputs are usually needed:
1. The physical location of Node Bs (either given or obtained from the cell planning
sub-problem);
2. The traffic demand passing through each Node B (either given or obtained from
the cell planning sub-problem);
3. The set of potential locations to install RNCs;
4. The different types of RNCs;
5. The different types of links to connect Node Bs to RNCs;
6. The handover frequency between adjacent cells.
Depending on the planner's decision, the Node Bs might connect internally to each other based on some interconnection policies. This is also true for the RNCs. By so doing, the access network sub-problem is extended and will also include the trunks among the Node Bs themselves, as well as among the RNCs themselves. In a tree interconnection, the Node Bs are either directly connected to RNCs or cascaded. Other types of topologies are star, ring and mesh. The interested reader can find more information on access network topologies in the references. Given the above
inputs and the type of topology, the access network planning sub-problem aims to
find one or more of the following as output:
1. The optimal number of RNCs;
2. The best location to install RNCs;
3. The type of RNCs;
4. The link topology and type between Node Bs;
5. The link topology and type between RNCs;
upon the position of BSs and RNCs. Charnsripinyo considers the design problem of a 3G access network while maintaining an acceptable level of quality of service. The problem was formulated as a Mixed Integer Programming (MIP) model to minimize the cost.
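To convey the flavour of this cost-minimising assignment (without reproducing the referenced MIP model), the sketch below greedily assigns Node Bs to candidate RNC sites subject to a capacity limit; all costs and capacities are hypothetical.

    # Greedy sketch: assign Node Bs to candidate RNC sites at minimum cost.
    # link_cost[b][r] is the (hypothetical) cost of connecting Node B b to RNC r;
    # each opened RNC adds a fixed cost and can serve a limited number of Node Bs.
    link_cost = [
        [10, 25, 40],
        [12, 20, 38],
        [35, 15, 22],
        [40, 18, 20],
        [55, 30, 12],
        [60, 33, 10],
    ]
    rnc_fixed_cost = [100, 120, 110]
    rnc_capacity = [3, 3, 3]                 # max Node Bs per RNC

    assignment = {}                          # Node B index -> RNC index
    load = [0] * len(rnc_fixed_cost)
    opened = set()

    # Process Node Bs in order of their cheapest link, then connect each one to
    # the cheapest feasible RNC, paying the fixed cost if it is not yet opened.
    for b in sorted(range(len(link_cost)), key=lambda n: min(link_cost[n])):
        feasible = [r for r in range(len(rnc_fixed_cost)) if load[r] < rnc_capacity[r]]
        r_best = min(feasible, key=lambda r: link_cost[b][r]
                     + (rnc_fixed_cost[r] if r not in opened else 0))
        assignment[b] = r_best
        load[r_best] += 1
        opened.add(r_best)

    total_cost = sum(rnc_fixed_cost[r] for r in opened) + \
                 sum(link_cost[b][r] for b, r in assignment.items())
    print("Node B -> RNC assignment:", assignment)
    print("Opened RNCs:", sorted(opened), "| total cost:", total_cost)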
Reliable Access Networks
Network reliability (also known as survivability) describes the ability of the network to function and not to disturb the services during and after a failure. The need for seamless connectivity has been a motivation for many researchers to explore new techniques for network reliability. Tipper et al. introduce a framework to study wireless access network survivability, restoration techniques and metrics for quantifying network survivability. Cellular networks are very vulnerable to failure. Failure can happen either at the node level (BSs, RNCs, MSS, etc.) or at the link level. Simulation results on different types of failure scenarios in a GSM network show that after a failure, the mobility of users worsens network performance. For example, in the case of a BS failure, users will try to connect to the adjacent BS and that degrades the overall network performance.
Charnsripinyo and Tipper proposed an optimization based model for the design
of survivable 3G wireless access backhaul networks in a mesh topology. Using a
two-phase algorithm, the authors first design a network with a minimum cost,
considering Quality of Service (QoS) and then update the topology to satisfy
survivability constraints. They also propose a heuristic based on iterative minimum cost routing to scale the design to real-world networks. Increasing the reliability level imposes more cost on the network; there is a trade-off between cost and reliability and, in fact, a higher level of reliability imposes a higher cost on the network. Aiming to create a balance between reliability and cost,
Szlovencsak et al. introduce two algorithms. The first algorithm modifies
the cost-minimum tree as produced in [70, 71], while respecting reliability constraints
and retains the tree structure. In the second algorithm, different links are added to
the most vulnerable parts of the topology to have a more reliable network. Krendzel
et al. study cost and reliability of 4G RAN in a ring topology. They estimate
cost and reliability in different configurations and state that considering cost and
reliability, the most preferable topology for 4G RAN is a multi-ring.
Once the access planning sub-problem is solved and the number, type, location and traffic of each RNC is known, the next step is to deal with the core planning
sub-problem.
c. The Core Network Planning Sub-Problem
The core network is the central part of UMTS network. The core network is
responsible for traffic switching, providing QoS, mobility management, network
security and billing. The core network consists of CS and PS domains. The key
elements of the CS domain are the MGW and MSS, responsible for switching and controlling functions respectively. The PS domain's key elements are the SGSN and GGSN, which are
responsible for packet switching.
The core planning sub-problem supposes that the following inputs are known:
1. The physical location of RNCs (either given or obtained from the access planning
sub-problem);
2. The traffic demand (volume and type) passing through each RNC (either given
or obtained from the access planning sub-problem);
3. The potential location of core NEs;
4. The different types of core NEs;
5. The different types of links to connect RNCs to core NEs.
Depending on the network planner, the topology of the backbone network could
be a ring, a full mesh, a mesh or a layered structure format. In the ring topology,
each NE is directly attached to the backhaul ring. Full mesh topology provides point
to- point communication such that each NE is able to communicate to any other NE
directly. The mesh topology is a limited version of the full mesh where, due to some restrictions, not every NE can communicate directly with every other NE. For fast-growing networks, maintaining a mesh or full mesh topology becomes an exhausting task.
To solve this sub-problem, the layered structure was introduced. A layered structure
does not provide direct link between all NEs. A tandem layer, as the nucleus of the
layered structure is defined. The tandem layer is composed of a series of tandem
(transit) nodes, usually connected in full mesh. Then, all NEs in the core network are
connected to at least one of the tandem nodes. Ouyang and Fallah state that a
layered structure has many advantages compared to a full mesh topology. Given that
the above inputs are available and the type of topology is decided, the core network
planning sub-problem aims to find one or more of the following as output:
1. The optimal number of core NEs;
2. The best location to install core NEs;
3. The type of core NEs;
4. The link topology and type between RNCs and core NEs;
5. The link topology and type between core NEs;
6. The traffic (volume and type) passing through core NEs.
The objective function is usually cost minimization, but other objectives like reliability could be considered. Not much research has concentrated on the core network planning sub-problem. The reason could be the similarity of this sub-problem to the wired network planning problem.
Shalak et al. present a model for the UMTS network architecture and discuss the required changes for upgrading the core network from GSM to UMTS. They outline network planning steps and compare the products of different vendors for the packet-switched network.
Ricciato et al. deal with the assignment of RNCs to SGSNs based on measured data. The optimization goals are to balance the number of RNCs among the available SGSNs and to minimize the inter-SGSN routing area updates. The required inputs are taken from a live network and the objective function is solved by linear integer programming methods. While they focus on GPRS, they state that their approach can be applied to UMTS networks. Harmatos et al. deal with the interconnection of RNCs, the placement
of MGWs and planning core network. They split the problem in two parts. The first
problem is interconnection of the RNCs which belong to the same UTRAN and the
placement and selection of a MGW to connect to core network. The second problem
is interconnection of MGWs together in backbone through IP or ATM network. The
objective is to design a fault-tolerant network with cost-optimal routing.
Remarks on the Sequential Approach
The sequential approach used to solve the design problem of UMTS networks has many advantages, but also some disadvantages. The sequential approach reduces the complexity of the problem by splitting it into three smaller sub-problems. By so doing, it is possible to include more details in each sub-problem for better planning. On the contrary, solving each sub-problem independently from the other sub-problems may result in locally optimal solutions, because the interactions between the sub-problems are not taken into account. Combining the results of the sub-problems does not
guarantee a final optimal solution. There is no integration technique developed yet to
incorporate all partial solutions in order to obtain a global solution. Therefore, a
global view from the network is required to define a global problem.
2.3.2 Global Approach
As mentioned earlier, the sequential approach breaks down the UMTS planning
problem into three sub-problems and solves them separately. As shown in Figure 2.7, a
global (also called integrated) approach considers more than one sub-problem at a
time and solves them jointly. Since all interactions between the sub-problems are
taken into account, a global approach has the advantage of providing a solution close
to the global optimal, but at the expense of increasing problem complexity. The global
problem of UMTS networks, which is composed of three NP-hard sub-problems, is also an NP-hard problem. The objective of the global approach is similar to the
objective of the sequential approach. Network cost minimization is the main concern,
while considering network performance. Research on the global approach is mainly divided into three directions:
i ) cell and access networks, ii ) access and core networks and iii ) the whole
network (i.e. cell, access and core).
Zhang et al. proposed a global approach to solve the UTRAN planning problem. Their model finds the number and location of Node Bs and RNCs, as well as their interconnections, in order to minimize the cost. Chamberland and Pierre consider the access and core network planning sub-problems. Given the BS locations, their model finds the location and types of the BSCs and MSCs, the types of links and the topology of the network. Since this sub-problem is NP-hard, the authors propose a tabu search (TS) algorithm and compare the results with a proposed lower bound. While the model is targeted at GSM networks, it can also be applied to UMTS networks with minor modifications. In another paper, Chamberland investigates the update problem in UMTS networks. Considering an update in the BS subsystem, the expansion model accommodates the new BSs into the network. The model determines the optimal
access and core networks and considers network performance issues like call and handover blocking. The author proposes a mathematical formulation of the problem, as well as a heuristic based on the TS principle. Recently, St-Hilaire et al. proposed a global approach in which the three sub-problems are considered simultaneously. The authors developed a mathematical programming model to plan UMTS networks in the uplink direction. Through a detailed example, they compared their integrated approach with the sequential approach. They proposed two heuristics based on local search and tabu search to solve the NP-hard problem. Furthermore, St-Hilaire et al. proposed a global model for the expansion problem of UMTS networks as an extension to their previous works. They state that this model can also be used for greenfield networks. They also present numerical results based on a branch-and-bound implementation.
2.3.3 Section Remarks
The purpose of solving the design problem of UMTS networks is to find an optimum
topology for the network which satisfies all desired constraints like cost, reliability,
performance and so on. Such an optimum topology is favorable for operators, as it can
save money and attract more subscribers. The planning problem of UMTS networks
is complex and composed of three sub-problems: the cell planning sub-problem, the
access network sub-problem and the core network sub-problem.
There are two main approaches to solve the planning problem of UMTS networks:
the sequential and the global. In the sequential approach, the three sub-problems are
tackled sequentially. Since each sub-problem is less complex than the initial problem,
more details can be considered in each sub-problem. As a result, solving sub-problems
is easier than solving the whole planning problem. However, since each sub-problem
is solved independently from the other sub-problems, the combination of the optimal solutions of the sub-problems (if obtained) might not result in an optimal solution for
the whole network planning problem. A global approach deals with more than one
sub-problem simultaneously and considers all interactions between the sub-problems.
The global problem has the advantage of finding good solutions which are closer to
the global minimum. The global problem is NP-hard and is more complex compared to the three individual sub-problems. To find approximate solutions for the global planning of
location in the future network structure will save money for the operator.
Various steps in the planning process:
Planning means building a network able to provide service to the customers wherever they are. This work can be simplified and structured into certain steps. The steps are:
Capacity Planning
Coverage Planning
Parameter Planning
Frequency Planning
Scrambling Code Planning
For a well-planned cellular network, the planner should meet the following requirements:
- Spectrum available;
- Subscriber growth forecast;
- Traffic density information.
Quality of Service:
- Area location probability (coverage probability);
- Blocking probability;
- End user throughput.
a) Radio Link Budgets:
There are some WCDMA-specific parameters in the link budget that are not used in a
TDMA-based radio access system such as GSM.
- Interference margin: The interference margin is needed in the link budget because
the loading of the cell, the load factor, affects the coverage. The more loading is
allowed in the system, the larger is the interference margin needed in the uplink, and
the smaller is the coverage area.
- Fast fading margin: Some headroom is needed in the mobile station transmission
power for maintaining adequate closed loop fast power control. This applies
especially to slow-moving pedestrian mobiles where fast power control is able to
effectively compensate for the fast fading.
- Soft handover gain: Handovers, soft or hard, give a gain against slow fading by
reducing the required log-normal fading margin. This is because the slow fading is
partly uncorrelated between the base stations, and by making a handover the mobile
can select a better base station. Soft handover gives an additional macro diversity gain
against fast fading by reducing the required Eb/N0 relative to a single radio link, due
to the effect of macro diversity combining.
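As a concrete illustration, the following Python sketch sums an uplink link budget including the three WCDMA-specific terms above; every figure is an assumed example value, not a value used elsewhere in this project.

    # Uplink link budget with WCDMA-specific margins (assumed example values).
    ue_tx_power_dbm        = 21.0    # 125 mW terminal
    ue_antenna_gain_db     = 0.0
    body_loss_db           = 3.0
    node_b_sensitivity_dbm = -121.0  # for the service and load considered
    node_b_antenna_gain_db = 18.0
    cable_loss_db          = 2.0
    interference_margin_db = 3.0     # noise rise allowed by the planned uplink load
    fast_fading_margin_db  = 4.0     # headroom for closed-loop power control
    log_normal_margin_db   = 7.3     # slow fading margin for the coverage probability
    soft_handover_gain_db  = 2.0     # gain against slow fading from SHO

    max_allowed_path_loss_db = (
        ue_tx_power_dbm + ue_antenna_gain_db - body_loss_db
        - node_b_sensitivity_dbm + node_b_antenna_gain_db - cable_loss_db
        - interference_margin_db - fast_fading_margin_db
        - log_normal_margin_db + soft_handover_gain_db
    )
    print(f"Maximum allowed propagation loss: {max_allowed_path_loss_db:.1f} dB")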
b) Load Factors:
The second phase of dimensioning is estimating the amount of supported traffic per
base station site. When the frequency reuse of a WCDMA system is 1, the system is typically interference-limited, and the amount of interference and the delivered cell capacity must therefore be estimated.
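This estimation is commonly done with the uplink load factor; the sketch below evaluates it for an assumed voice-only scenario and converts the load into the interference margin (noise rise) used in the link budget. All parameter values are illustrative assumptions.

    import math

    # Uplink load factor and interference margin (noise rise), assumed values.
    W = 3.84e6          # WCDMA chip rate [chips/s]
    eb_n0_db = 5.0      # required Eb/N0 for a 12.2 kbps voice user
    R = 12200.0         # user bit rate [bit/s]
    v = 0.67            # voice activity factor
    i = 0.65            # other-to-own-cell interference ratio
    n_users = 60

    eb_n0 = 10 ** (eb_n0_db / 10)
    # Load contribution of one user, then the total uplink load factor
    load_per_user = 1.0 / (1.0 + W / (eb_n0 * R * v))
    eta_ul = (1.0 + i) * n_users * load_per_user

    noise_rise_db = -10 * math.log10(1.0 - eta_ul)   # interference margin
    print(f"Uplink load factor: {eta_ul:.2f}")
    print(f"Noise rise (interference margin): {noise_rise_db:.1f} dB")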
c) Capacity Upgrade Paths:
When the amount of traffic increases, the downlink capacity can be upgraded in a
number of different ways. The most typical upgrade options are:
- more power amplifiers, if initially the power amplifier is split between sectors;
- two or more carriers, if the operator's frequency allocation permits;
- transmit diversity with a second power amplifier per sector.
The availability of these capacity upgrade solutions depends on the base station manufacturer; all of these capacity upgrade options may not be available in all base station types.
These capacity upgrade solutions do not require any changes to the antenna
configurations, only upgrades within the base station cabinet are needed on the site.
The uplink coverage is not affected by these upgrades. The capacity can also be improved by increasing the number of antenna sectors, for example, starting with omnidirectional antennas and upgrading to 3-sector and finally to 6-sector antennas. The drawback of increasing the number of sectors is that the antennas must be replaced. On the other hand, an increased number of sectors also brings improved coverage through a higher antenna gain.
d) Capacity per km2:
Providing high capacity will be challenging in urban areas where the offered amount
of traffic per km2 can be very high. In this section we evaluate the maximal capacity
that can be provided per km2 using macro and micro sites. For the micro cell layer we
assume a maximum site density of 30 sites per km2. Having an even higher site
density is challenging because the other-to-own cell interference tends to increase and
the capacity
per site decreases. Also, the site acquisition may be difficult if more sites are needed.
e) Soft Capacity:
Erlang Capacity: In the dimensioning phase the number of channels per cell was calculated.
Based on those figures, we can calculate the maximum traffic density that can be
supported with a given blocking probability. If the capacity is hard blocked, i.e.
limited by the amount of hardware, the Erlang capacity can be obtained from the
Erlang B model. If the maximum capacity is limited by the amount of interference in
the air interface, it is by definition a soft capacity, since there is no single fixed value
for the maximum capacity. The soft capacity can be explained as follows. The less
interference is coming from the neighbouring cells, the more channels are available in the middle cell. With a low number of channels per cell, i.e. for high bit rate real-time
data users, the average loading must be quite low to guarantee low blocking
probability.
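For the hard-blocked case, the Erlang B formula can be evaluated directly. A minimal Python sketch, with an assumed channel count and a 2% blocking target:

    def erlang_b(traffic_erl, channels):
        """Blocking probability from the Erlang B formula (iterative form)."""
        b = 1.0
        for n in range(1, channels + 1):
            b = traffic_erl * b / (n + traffic_erl * b)
        return b

    def max_traffic(channels, target_blocking=0.02, step=0.01):
        """Largest offered traffic (Erl) that keeps blocking below the target."""
        a = 0.0
        while erlang_b(a + step, channels) <= target_blocking:
            a += step
        return a

    channels = 60        # e.g. voice channels per cell from the dimensioning step
    print(f"{channels} channels, 2% blocking -> "
          f"{max_traffic(channels):.1f} Erl supported")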
f) Network Sharing:
The cost of the network deployment can be reduced by network sharing. If both operators have their own core networks and share a common radio access network (RAN), the solution offers cost savings in site acquisition, civil works, transmission, RAN equipment costs and operating expenses. Both operators can still keep their full independence in the core network and services, and have dedicated radio carrier frequencies.
When the amount of traffic increases in the future, the operators can exit the shared
RAN and continue with separate RANs.
3.2.2 Capacity and Coverage Planning and Optimisation:
a. Iterative Capacity and Coverage Prediction:
In this section, detailed capacity and coverage planning are presented. In the detailed
planning phase real propagation data from the planned area is needed, together with
the estimated user density and user traffic. Also, information about the existing base
station sites is needed in order to utilize the existing site investments. The outputs of
the detailed capacity and coverage planning are the base station locations,
configurations and parameters. Since, in WCDMA, all users are sharing the same
interference resources in the air interface, they cannot be analysed independently.
Each user is influencing the others and causing their transmission powers to change.
These changes themselves again cause changes, and so on. Therefore, the whole
prediction process has to be done iteratively until the transmission powers stabilize.
Also, the mobile speeds, multipath channel profiles, bit rates and types of services used play a more important role than in second-generation TDMA/FDMA systems. Furthermore, WCDMA includes fast power control in both uplink and downlink, soft/softer handover and orthogonal downlink channels, which also impact
system performance. The main difference between WCDMA and TDMA/FDMA
coverage prediction is that the interference estimation is already crucial in the
coverage prediction phase in WCDMA. In the current GSM coverage planning
processes the base station sensitivity is typically assumed to be constant and the
coverage threshold is the same for each base station. In the case of WCDMA the base
station sensitivity depends on the number of users and used bit rates in all cells, thus it
is cell- and service-specific. Note also that in third generation networks, the downlink
can be loaded higher than the uplink or vice versa.
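The iterative nature of this prediction can be illustrated with a toy uplink calculation in which the received powers of a handful of users are recomputed until they stabilise; the path losses, Eb/N0 target and noise floor below are invented for illustration, and the model deliberately ignores many real effects (multiple cells, soft handover, power limits).

    import math

    # Toy uplink iteration: users adjust power to meet an Eb/N0 target while the
    # cell interference floor rises with the total received power (assumed values).
    W = 3.84e6
    R = 12200.0
    eb_n0_target = 10 ** (5.0 / 10)                 # 5 dB
    pg = W / R                                      # processing gain
    noise_w = 10 ** ((-103.0 - 30) / 10)            # -103 dBm thermal noise in watts
    path_loss_db = [128, 131, 134, 137, 140, 143]   # one entry per user

    rx_power = [1e-12] * len(path_loss_db)          # initial received powers [W]
    for iteration in range(100):
        total_rx = sum(rx_power)
        new_rx = []
        for p in rx_power:
            interference = noise_w + (total_rx - p)          # other users + noise
            new_rx.append(eb_n0_target * interference / pg)  # power to hit target
        delta = max(abs(a - b) for a, b in zip(new_rx, rx_power))
        rx_power = new_rx
        if delta < 1e-18:
            break

    for pl, p_rx in zip(path_loss_db, rx_power):
        tx_dbm = 10 * math.log10(p_rx * 1000) + pl
        print(f"path loss {pl} dB -> UE TX power {tx_dbm:5.1f} dBm")
    print(f"Converged after {iteration + 1} iterations")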
b. Planning Tool:
In second generation systems, detailed planning concentrated strongly on coverage
planning. In third generation systems, a more detailed interference planning and
capacity analysis than simple coverage optimisation is needed. The tool should aid the
planner to optimise the base station configurations, the antenna selections and antenna
directions and even the site locations, in order to meet the quality of service and the
capacity and service requirements at minimum cost.
c. Network Optimisation:
Network optimisation is a process to improve the overall network quality as
experienced by the mobile subscribers and to ensure that network resources are used
efficiently. Optimisation includes:
1. Performance measurements.
2. Analysis of the measurement results.
3. Updates in the network configuration and parameters.
The measurements can be obtained from the test mobile and from the radio network
elements. The WCDMA mobile can provide relevant measurement data, e.g. uplink
transmission power, soft handover rate and probabilities, CPICH Ec/N0 and downlink
BLER. The network performance can best be observed when the network load is high. With a low load some of the problems may not be visible. Therefore, we need to consider artificial load generation to emulate high loading in the network. A high uplink load can be generated by increasing the Eb/N0 target of the outer loop power control. In normal operation the outer loop power control provides the required quality with the minimum Eb/N0. If we manually increase the Eb/N0 target, e.g. 10 dB above the normal operating point, that uplink connection will cause 10 times more interference and, from the interference point of view, converts a 32 kbps connection into a 320 kbps high-bit-rate connection.
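This interference equivalence is easy to check numerically (a minimal sketch):

    # Raising the outer-loop Eb/N0 target by delta_db multiplies the required
    # received power, and hence the interference caused, by 10 ** (delta_db / 10).
    delta_db = 10.0
    actual_rate_kbps = 32.0

    interference_factor = 10 ** (delta_db / 10)          # 10x for +10 dB
    equivalent_rate_kbps = actual_rate_kbps * interference_factor

    print(f"+{delta_db:.0f} dB target -> {interference_factor:.0f}x interference,")
    print(f"a {actual_rate_kbps:.0f} kbps connection loads the cell like "
          f"{equivalent_rate_kbps:.0f} kbps")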
Network optimization can initially be seen as a very involved task, as a large number of variables are available for tuning, impacting different aspects of the network performance. To simplify this process a step-by-step procedure is adopted. This approach divides the optimization into simpler steps, each step focusing on a limited set of parameters:
RF optimization will focus mainly on the RF configuration and, to a lesser extent, on reselection parameters.
Voice optimization will focus on improving call setup (Mobile Originated and Mobile Terminated) and call reliability, thus focusing mainly on access and handover parameters.
Advanced services optimization will rely extensively on the effort conducted for voice. The initial parts of the call setup are similar for all types of services, and vendors have not at this point defined different sets of handover parameters for different services. Consequently, optimizing these services will focus on a limited set of parameters, typically power assignment, quality target, and Radio Link Control (RLC) parameters.
Inter-system (also known as inter-RAT) change (both reselection and handover) optimization is considered once the WCDMA layer is fully optimized. This approach will ensure that inter-system parameters are set to correspond to the finalized coverage boundaries rather than being set to alleviate temporary issues due to sub-optimal tuning.
Figure 3.1: The optimization process is simplified by dividing it into basic steps: pre-optimization tasks (ensuring the system is ready for optimization), RF and intra-frequency parameter optimization, CS and PS service optimization and, optionally, inter-system change optimization.
Even after careful RF planning, the first step of optimization should concentrate on
RF. This is necessary as RF propagation is affected by so many factors (e.g.,
buildings, terrain, vegetation) that propagation models are never fully accurate. RF
optimization thus takes into account any difference between predicted and actual
coverage, both in terms of received signal (RSCP) and quality of the received signal
(Ec/No). In addition, the same qualitative metrics defined for planning should be
considered: cell overlap, cell transition, and coverage containment of each cell. At the
same time, assuming that a UE is used to measure the RF condition in parallel with a
pilot scanner, reselection parameters can be estimated considering the dynamics
introduced by the mobility testing: during network planning, dynamics cannot be considered, as network planning tools are static by nature, only simulating one given location at a time, irrespective of the surroundings. In addition, once the RF conditions are known, dynamic simulation can
be used to estimate the handover parameters, even before placing any calls on the
network.
Service optimization is needed to refine the parameter settings (reselection, access,
and handover). Because the same basic processes are used for all types of services, it
is best to set the parameters while performing the simpler and best understood of all
services: voice. This is fully justified when the call flow differences for the different services are considered. Either for access or for handover, the main difference between voice and other services is the resource availability. Testing with the voice service greatly simplifies the testing procedure and, during analysis, limits the number of parameters, or variables, to tune. During this phase, parameter setting will be the main effort. Different sets of parameters are likely to be tried to achieve the best possible trade-offs: coverage vs. capacity, call access (Mobile Originated and Mobile Terminated) reliability vs. call setup latency, call retention vs. Active Set size... to name only a few. The selection of the set of parameters to leave on the network will directly depend on the achieved performance and the operator's priorities (coverage, capacity, access performance, call retention performance).
Once the performance targets are reached for voice, optimizing advanced services
such as video-telephony and packet switched (PS) data service will concentrate on a
limited set of parameters: power assignment, quality target (BLER target), and any
bearer-specific parameters (RLC or channel switching parameters, for example). During the optimization of the PS data service the importance of good RF optimization will become apparent when channel switching is considered. Channel switching is a generic term referring to the capability of the network to change the PS data bearer to a different data rate (rate switching) or a different state (type switching). Channel switching is intended to adapt the bearer to the user's needs and to limit the resource utilization. Saving resources is achieved by reducing the data rate when the RF conditions degrade. By reducing the data rate, the spreading gain increases, resulting in a lower required power to sustain the link.
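The spreading (processing) gain behind this statement is simply W/R; a short sketch with typical R99 bearer rates (chosen here only for illustration):

    import math

    W = 3.84e6                      # chip rate [chips/s]
    bearer_rates_kbps = [384, 128, 64, 32]

    for rate in bearer_rates_kbps:
        pg_db = 10 * math.log10(W / (rate * 1e3))   # processing (spreading) gain
        print(f"{rate:3d} kbps bearer: processing gain = {pg_db:4.1f} dB")
    # Switching from 384 kbps down to 64 kbps, for example, gains about 7.8 dB,
    # which is power that no longer has to be transmitted to sustain the link.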
Lastly, once the basic services are optimized, i.e., the call delivery and call retention performance targets are met, the optimization can focus on service continuity, through inter-system changes, and on application-specific optimization. Inter-system changes, either reselection or handover, should be optimized only once the basic WCDMA optimization is completed, to ensure that the WCDMA coverage boundary is stable. Application optimization can be seen as the final touch of service optimization and is typically limited to the PS domain. In this last effort, the system parameters are optimized not to get the highest throughput or the lowest delay, but to improve the subscriber experience while using a given application. A typical example would be the image quality for video streaming. The main issue for this application-based optimization might be that different applications may have conflicting requirements. In such a case, the different applications and their impacts on the network should be prioritized. Irrespective of the application considered, the main controls available to the optimization engineer are the RLC parameters, the target quality and the channel switching parameters. The art in this process is to improve the end-user perceived quality while improving the cell or system capacity.
The owner of a candidate site may not want to sell it, or it may be unusable (e.g., in the middle of a pond) or located in a restricted area. Environmental and health issues can also have an impact. Base station towers in an open country landscape may irritate some people. The radiation from base station transmitters is also a concern for some (with or without good reason, most often without). All these issues have to be taken into consideration. The number of HOs has to be minimized as they create signalling traffic in the network. This can be done, for example, with large macrocells. Sectorization has to be considered and implemented where required.
This phase includes the following procedures:
Detailed characterization of the radio environment;
Control channel power planning;
Soft handover (SHO) parameter planning;
Interfrequency (HO) planning;
Iterative network coverage analysis;
Radio-network testing.
With soft handover the quality of the connection improves, but on the other hand SHOs may increase the overall system
interference level and, thus, also decrease the system capacity. An SHO also
consumes more data transmission capacity in the network. An operator must find a
suitable compromise between these extremes; an SHO must be provided to those
mobiles that really need it, but not to others, to keep the level of system interference
bearable. This can be accomplished by the correct setting of the SHO parameters.
3.6.3 HO Problems
Interfrequency HO is a difficult procedure for a mobile station as it
has to perform preliminary measurements on other channels at the same time that it is
receiving and transmitting on the current channels. There are two alternatives for
accomplishing this procedure: (1) the use of two receivers in a mobile station, or (2)
the use of compressed mode. As extra hardware is expensive, the most attractive
method for achieving the interfrequency HO is the compressed mode. Compressed
mode means that during some timeslots the data to be sent is squeezed, or
compressed, and sent over a shorter period of time. This leaves some spare time,
which can be used for measurements on other frequencies. This compression is
achieved by temporarily using a lower spreading ratio. Compressed mode may also be
necessary in the uplink if the measured frequency is close to it, as is the case with
GSM-1800.
3.6.4 Hierarchical Cells
Hierarchical cell structures are by no means a WCDMA-specific
issue. They are also used a lot in other network technologies, such as GSM. In
WCDMA networks the hierarchical structures have some specific characteristics
though.
The most straightforward way to implement a hierarchical cell
structure in WCDMA is to allocate each hierarchy level on a different frequency. If an
operator has been allocated a 15-MHz frequency area, this is enough for three
frequency channels, each having a 5-MHz bandwidth. If the operator also has an
unpaired TDD frequency slice, this can be used as one hierarchical level. One channel
could be used for picocells, a second for microcells, and a third for macrocells.
Another possibility is to use one frequency for macrocells and two frequencies for
microcells. Picocells, if needed, could be provided on the TDD frequency.
Penetration losses are the additional losses required to provide coverage inside a building. In-building (or in-car) losses should be given as a mean value and a standard deviation. Both must be taken into account if we wish to cover more than the average building.
For in-building coverage, the total standard deviation is calculated as the square root of the sum of the squares of the individual standard deviations (building penetration and outdoor fading).
From these factors we can derive the required design thresholds for the desired system quality:
Required signal level = receiver sensitivity + penetration loss + fade margin.
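Putting the combination rule and the threshold formula together, a short Python sketch with assumed example values (a Gaussian model is used to turn the target coverage probability into a fade margin):

    import math
    from statistics import NormalDist

    # Assumed example values
    sensitivity_dbm = -115.0        # receiver sensitivity for the service
    penetration_loss_db = 15.0      # mean in-building penetration loss
    sigma_outdoor_db = 7.0          # outdoor slow-fading standard deviation
    sigma_building_db = 6.0         # spread of the building penetration loss
    coverage_probability = 0.90     # target location probability

    # Total standard deviation: square root of the sum of squares
    sigma_total_db = math.hypot(sigma_outdoor_db, sigma_building_db)

    # Fade margin for the target location probability (Gaussian model)
    fade_margin_db = NormalDist().inv_cdf(coverage_probability) * sigma_total_db

    required_level_dbm = sensitivity_dbm + penetration_loss_db + fade_margin_db
    print(f"Combined sigma: {sigma_total_db:.1f} dB")
    print(f"Fade margin:    {fade_margin_db:.1f} dB")
    print(f"Required outdoor signal level: {required_level_dbm:.1f} dBm")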
Network modelling
Interference analysis
QoS analysis: FER/BER/BLER/MOS prediction plots
Neighbour planning and handover analysis
Atoll in UMTS
Atoll was the first UMTS network planning solution available on the market. Since then it has stayed ahead of the competition with continuous improvements made through close cooperation with GSM and UMTS operators.
Simulation and Analysis:
- State-of-the-art Monte Carlo UMTS/HSPA/MBMS simulator including DL and UL power control, RRM, HSDPA/HSUPA to R99 downgrading and carrier allocation algorithms;
- Real-time point analysis tool;
- Generation of prediction plots, based on simulations or on user-defined cell load figures, including: Ec/Io prediction plots, downlink and uplink Eb/Nt prediction plots, service areas, number of servers, handover areas, interference and pilot pollution, BER/FER/BLER, HSPA prediction plots and MBMS service area.
Neighbour and Scrambling Code Planning
GSM/UMTS Co-planning:
- Site sharing;
- Simultaneous display and analysis of 2G and 3G networks;
- Inter-technology handover modelling based on proven intra/inter-technology
Figure 3.3: Workflow in Atoll - starting from the network configuration (site, transmitter and cell parameters, adding network elements and changing parameters), basic predictions (best server, signal level) are generated; traffic maps and Monte Carlo simulations or user-defined cell load conditions then drive the UMTS/HSPA predictions, neighbour allocation and scrambling code planning, leading to study reports.
4.3 Conclusion
In conclusion, we summarize the important points of the project. The importance of network planning has been studied. All criteria that have to be considered while designing a network were studied, and we have planned a UMTS radio network for Gachibowli with around thirty UMTS Node Bs (base stations) in such a way that the signal level on the street is better than -65 dBm, so that indoor coverage of at least -85 dBm is available assuming penetration losses of around 20 dB. The Gachibowli area, including the IT Park, Financial Village and L&T Infocity, is thereby covered in an effective manner. This completes the report.
References:
[1] Walke, B., Mobile Radio Networks, New York: Wiley, 1999.
[2] Prasad, R., W. Mohr, and W. Konhauser, Third Generation Mobile Communication Systems, Norwood, MA: Artech House, 2000.
[3] Ranta, P., and M. Pukkila, "Interference Suppression by Joint Demodulation," in GSM Evolution Towards 3rd Generation Systems (Z. Zvonar, P. Jung, and K. Kammerlander, eds.), Norwell, MA: Kluwer Academic Publishers, 1999.
[4] Shapira, J., "Microcell Engineering in CDMA Cellular Networks," IEEE Trans. on Vehicular Technology, Vol. 43, No. 4, November 1994.
[5] Koshi, V., "Radio Network Planning for UTRA TDD Systems," 3G Mobile Communication Technologies Conference, IEE Conference Publication No. 471, London, March 2000.