
CHAPTER 1 INTRODUCTION


The mobile wireless industry began its technology creation, revolution, and evolution in the early 1970s. In the past few decades, mobile wireless technologies have experienced four to five generations of revolution and evolution, from 0G to 4G. 5G stands for fifth-generation mobile technology. 5G technology changes the way cell phones are used, offering very high bandwidth that users have never experienced before. Mobile users today have much greater awareness of cell phone (mobile) technology, and 5G includes many advanced features that make it powerful and likely to be in huge demand in the near future. Were a 5G family of standards to be implemented, it would likely be around the year 2020, according to some sources. A new mobile generation has appeared roughly every ten years since the first 1G system (NMT) was introduced in 1981: the 2G (GSM) system began to roll out in 1992, 3G (W-CDMA/FOMA) appeared in 2001, and "real" 4G standards fulfilling the IMT-Advanced requirements were ratified in 2011, with products expected in 2012-2013. Predecessor technologies have typically appeared on the market a few years before each new generation. New mobile generations are typically assigned new frequency bands and wider spectral bandwidth per frequency channel (1G up to 30 kHz, 2G up to 200 kHz, 3G up to 5 MHz, and 4G up to 40 MHz), but the main issue is that there is little room left for new frequency bands or larger channel bandwidths. From the end user's point of view, previous mobile generations have implied substantial increases in peak bit rate (i.e. the physical-layer net bit rate for short-distance communication). The major difference between 4G and 5G from a user's point of view must therefore be something other than increased maximum throughput: for example, lower battery consumption, lower outage probability (better coverage), high bit rates over larger portions of the coverage area, cheaper or no traffic fees due to low infrastructure deployment costs, or higher aggregate capacity for many simultaneous users.

1.1 History of Mobile Communication

In 2008, the South Korean IT R&D programme "5G mobile communication systems based on beam-division multiple access and relays with group cooperation" was formed. On 8 October 2012, the UK's University of Surrey secured £35M for a new 5G research centre, jointly funded by the British government's UK Research Partnership Investment Fund (UKRPIF) and a consortium of key international mobile operators and infrastructure providers including Huawei, Samsung, Telefonica Europe, Fujitsu Laboratories Europe, Rohde & Schwarz, and Aircom International. It will offer testing facilities to mobile operators keen to develop a mobile standard that uses less energy and less radio spectrum while delivering speeds faster than current 4G, with aspirations for the new technology to be ready within a decade. On 1 November 2012, the EU project "Mobile and wireless communications Enablers for the Twenty-twenty Information Society" (METIS) started its activity towards the definition of 5G. METIS intends to ensure an early global consensus on these systems. In this sense, METIS will play an important role in building consensus among other major external stakeholders prior to global standardization activities, by initiating and addressing work in relevant global fora (e.g. ITU-R) as well as in national and regional regulatory bodies.
In February 2013, ITU-R Working Party 5D (WP 5D) started two study items: (1) a study on the IMT vision for 2020 and beyond, and (2) a study on future technology trends for terrestrial IMT systems, both aimed at building a better understanding of future technical aspects of mobile communications towards the definition of the next mobile generation. On 1 October 2013, NTT (Nippon Telegraph and Telephone), the company that launched the world's first 1G network in Japan, won the Minister of Internal Affairs and Communications Award at CEATEC for its 5G R&D efforts. On 6 November 2013, Huawei announced plans to invest a minimum of $600 million in R&D for next-generation 5G networks capable of speeds 100 times faster than modern LTE networks.

1.1.1 First Generation

1G (or 1-G) refers to the first generation of wireless telephone technology for mobile telecommunications. These are the analog telecommunications standards introduced in the 1980s and used until they were replaced by 2G digital telecommunications. The main difference between the two succeeding mobile telephone systems, 1G and 2G, is that the radio signals used by 1G networks are analog, while 2G networks are digital. Although both systems use digital signaling to connect the radio towers (which listen to the handsets) to the rest of the telephone system, in 2G the voice itself is encoded into digital signals during a call, whereas in 1G it is only modulated onto higher frequencies, typically 150 MHz and up. The inherent advantages of digital over analog technology meant that 2G networks eventually replaced 1G almost everywhere.

Figure 1.1 1G Mobile

One such standard is NMT (Nordic Mobile Telephone), used in the Nordic countries, Switzerland, the Netherlands, Eastern Europe, and Russia. Others include AMPS (Advanced Mobile Phone System), used in North America and Australia; TACS (Total Access Communications System) in the United Kingdom; C-450 in West Germany, Portugal, and South Africa; Radiocom 2000 in France; and RTMI in Italy. In Japan there were multiple systems: three standards, TZ-801, TZ-802, and TZ-803, were developed by NTT (Nippon Telegraph and Telephone Corporation), while a competing system operated by DDI (Daini Denden Planning, Inc.) used the JTACS (Japan Total Access Communications System) standard. The antecedent to 1G technology is the mobile radio telephone, or 0G. The first commercially automated cellular network (the 1G generation) was launched in Japan by NTT in 1979, initially in the metropolitan area of Tokyo. Within five years, the NTT network had been expanded to cover the whole population of Japan and became the first nationwide 1G network. In 1981, this was followed by the simultaneous launch of the Nordic Mobile Telephone (NMT) system in Denmark, Finland, Norway, and Sweden. NMT was the first mobile phone network featuring international roaming. The first 1G network launched in the USA was Chicago-based Ameritech in 1983, using the Motorola DynaTAC mobile phone. Several countries then followed in the early to mid-1980s, including the UK, Mexico, and Canada.

1.1.2 Second Generation

2G (or 2-G) is short for second-generation wireless telephone technology. 2G cellular telecom networks were commercially launched on the GSM standard in Finland by Radiolinja (now part of Elisa Oyj) in 1991.
Three primary benefits of 2G networks over their predecessors were that phone conversations were digitally encrypted; that 2G systems were significantly more efficient with spectrum, allowing far greater mobile phone penetration levels; and that 2G introduced data services for mobile, starting with SMS text messages. 2G technologies enabled the various mobile phone networks to provide services such as text messages, picture messages, and MMS (multimedia messages). All text messages sent over 2G are digitally encrypted, allowing the transfer of data in such a way that only the intended receiver can receive and read it.

Figure 1.2 2G Mobile

After 2G was launched, the previous mobile telephone systems were retrospectively dubbed 1G. While radio signals on 1G networks are analog, radio signals on 2G networks are digital. Both systems use digital signaling to connect the radio towers (which listen to the handsets) to the rest of the telephone system. 2G has been superseded by newer technologies such as 2.5G, 2.75G, 3G, and 4G; however, 2G networks are still used in many parts of the world. While digital calls tend to be free of static and background noise, the lossy compression they use reduces their quality: the range of sound conveyed is reduced, and a caller hears less of the tonality of someone's voice. In less populous areas, the weaker digital signal transmitted by a cellular phone may not be sufficient to reach a cell tower. This tends to be a particular problem on 2G systems deployed on higher frequencies, but is mostly not a problem on 2G systems deployed on lower frequencies. National regulations, which dictate where 2G can be deployed, differ greatly among countries. Analog has a smooth decay curve, while digital has a jagged, steppy one; this can be both an advantage and a disadvantage. Under good conditions, digital will sound better. Under slightly worse conditions, analog will experience static, while digital has occasional dropouts. As conditions worsen, though, digital will start to fail completely, dropping calls or becoming unintelligible, while analog degrades slowly, generally holding a call longer and allowing at least some of the audio to be understood.

1.1.3 Generation 2.5

2.5G ("second and a half generation") describes 2G systems that have implemented a packet-switched domain in addition to the circuit-switched domain. It does not necessarily provide faster services, because bundling of timeslots is also used for circuit-switched data services (HSCSD). The first major step in the evolution of GSM networks to 3G occurred with the introduction of General Packet Radio Service (GPRS). CDMA2000 networks similarly evolved through the introduction of 1xRTT. The combination of these capabilities came to be known as 2.5G. GPRS could provide data rates from 56 kbit/s up to 115 kbit/s. It can be used for services such as Wireless Application Protocol (WAP) access, Multimedia Messaging Service (MMS), and Internet communication services such as email and World Wide Web access. GPRS data transfer is typically charged per megabyte of traffic transferred, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user is actually utilizing the capacity or is idle.
1xRTT supports bi-directional (uplink and downlink) peak data rates up to 153.6 kbit/s, delivering an average user data throughput of 80-100 kbit/s in commercial networks. It can also be used for WAP, SMS, and MMS services, as well as Internet access.

1.1.4 Third Generation

3G, short for third generation, is the third generation of mobile telecommunications technology. 3G telecommunication networks support services that provide an information transfer rate of at least 200 kbit/s. Later 3G releases, often denoted 3.5G and 3.75G, also provide mobile broadband access of several Mbit/s to smartphones and to mobile modems in laptop computers. 3G finds application in wireless voice telephony, mobile Internet access, fixed wireless Internet access, video calls, and mobile TV. 3G is a set of standards for mobile devices and mobile telecommunication services and networks that comply with the International Mobile Telecommunications-2000 (IMT-2000) specifications of the International Telecommunication Union.

Figure 1.3 3G Mobile System

A new generation of cellular standards has appeared approximately every tenth year since 1G systems were introduced in 1981/1982. Each generation is characterized by new frequency bands, higher data rates, and non-backwards-compatible transmission technology. The first release of the 3GPP Long Term Evolution (LTE) standard does not completely fulfil the ITU 4G requirements, called IMT-Advanced. First-release LTE is not backwards compatible with 3G, but is a pre-4G or 3.9G technology, although it is sometimes branded 4G by service providers. Its evolution, LTE Advanced, is a 4G technology. WiMAX is another technology verging on, or marketed as, 4G. Several telecommunications companies market wireless mobile Internet services as 3G, indicating that the advertised service is provided over a 3G wireless network. Services advertised as 3G are required to meet IMT-2000 technical standards, including standards for reliability and speed (data transfer rates). To meet the IMT-2000 standards, a system is required to provide peak data rates of at least 200 kbit/s (about 0.2 Mbit/s). However, many services advertised as 3G provide speeds well above this minimum technical requirement.

1.1.5 Fourth Generation

In telecommunication systems, 4G is the fourth generation of mobile communication technology standards, the successor to the third generation (3G) standards. A 4G system provides mobile ultra-broadband Internet access, for example to laptops with USB wireless modems, to smartphones, and to other mobile devices. Conceivable applications include amended mobile web access, IP telephony, gaming services, high-definition mobile TV, video conferencing, 3D television, and cloud computing.

Figure 1.4 4G Mobile

Two 4G candidate systems are commercially deployed: the Mobile WiMAX standard (first used in South Korea in 2006) and the first-release Long Term Evolution (LTE) standard (in Oslo, Norway, and Stockholm, Sweden, since 2009). It has, however, been debated whether these first-release versions should be considered 4G at all.
In the United States, Sprint (previously Clearwire) has deployed Mobile WiMAX networks since 2008, and MetroPCS was the first operator to offer LTE service, in 2010. USB wireless modems have been available since the start, while WiMAX smartphones have been available since 2010 and LTE smartphones since 2011. Equipment made for different continents is not always compatible because of different frequency bands; Mobile WiMAX is currently (April 2012) not available for the European market. The 4G system was originally envisioned by the Defense Advanced Research Projects Agency (DARPA). DARPA selected a distributed architecture and end-to-end Internet Protocol (IP), and believed at an early stage in peer-to-peer networking, in which every mobile device would be both a transceiver and a router for other devices in the network, eliminating the spoke-and-hub weakness of 2G and 3G cellular systems. Since the 2.5G GPRS system, cellular systems have provided dual infrastructures: packet-switched nodes for data services and circuit-switched nodes for voice calls. In 4G systems, the circuit-switched infrastructure is abandoned and only a packet-switched network is provided, whereas 2.5G and 3G systems require both packet-switched and circuit-switched network nodes, i.e. two infrastructures in parallel. This means that in 4G, traditional voice calls are replaced by IP telephony. Bharti Airtel launched India's first 4G service, using TD-LTE technology, in Kolkata on 10 April 2012. Fourteen months before the official launch in Kolkata, a group consisting of China Mobile, Bharti Airtel, and SoftBank Mobile, called the Global TD-LTE Initiative (GTI), came together in Barcelona, Spain, and signed a commitment to TD-LTE standards for the Asian region. Note that Airtel's 4G network does not support mainstream 4G phones such as the Apple iPhone 5/5s, Samsung Galaxy Note 3, Samsung Galaxy S4, and others.

1.2 Overview of 5G

5G (fifth-generation mobile networks or fifth-generation wireless systems) denotes the next major phase of mobile telecommunications standards beyond the current 4G/IMT-Advanced standards. 5G is also referred to as the "beyond 2020" mobile communications technology. 5G does not describe any particular specification in any official document published by a telecommunication standardization body. Although updated standards defining capabilities beyond those in the current 4G standards are under consideration, such new capabilities are still being grouped under the current ITU-R 4G standards. A new mobile generation has appeared approximately every tenth year since the first 1G system, Nordic Mobile Telephone, was introduced in 1981. The first 2G system started to roll out in 1992, the first 3G system appeared in 2001, and 4G systems fully compliant with IMT-Advanced were standardised in 2012. The development of the 2G (GSM) and 3G (IMT-2000 and UMTS) standards took about 10 years from the official start of the R&D projects, and development of 4G systems started in 2001 or 2002.[1][2] Predecessor technologies have appeared on the market a few years before each new mobile generation: for example, the pre-3G system CdmaOne/IS-95 in the US in 1995, and the pre-4G systems Mobile WiMAX in South Korea in 2006 and first-release LTE in Scandinavia in 2009. Mobile generations typically refer to non-backwards-compatible cellular standards following requirements stated by ITU-R, such as IMT-2000 for 3G and IMT-Advanced for 4G.
In parallel with the development of the ITU-R mobile generations, the IEEE and other standardisation bodies also develop wireless communication technologies, often for higher data rates and higher frequencies but shorter transmission ranges. The first gigabit IEEE standard was IEEE 802.11ac, commercially available since 2013, soon to be followed by the multi-gigabit WiGig standard, IEEE 802.11ad. Based on the above observations, some sources suggest that a new generation of 5G standards may be introduced in the early 2020s. However, no international 5G development projects have yet officially been launched, and there is still a large extent of debate on what exactly 5G is about. Prior to 2012, some industry representatives expressed skepticism towards 5G, but later took a positive stand. New mobile generations are typically assigned new frequency bands and wider spectral bandwidth per frequency channel (1G up to 30 kHz, 2G up to 200 kHz, 3G up to 20 MHz, and 4G up to 100 MHz), but skeptics argue that there is little room for larger channel bandwidths and new frequency bands suitable for land-mobile radio.[5] From the user's point of view, previous mobile generations have implied substantial increases in peak bit rate (i.e. the physical-layer net bit rate for short-distance communication), up to the 1 Gbit/s offered by 4G. If 5G appears and reflects these prognoses, the major difference from a user's point of view between 4G and 5G must be something other than increased peak bit rate: for example, a higher number of simultaneously connected devices, higher system spectral efficiency (data volume per area unit), lower battery consumption, lower outage probability (better coverage), high bit rates in larger portions of the coverage area, lower latencies, lower infrastructure deployment costs, or higher versatility, scalability, and reliability of communications.

1.3 Proposals for 5G Standard

Horizon 2020 Advanced 5G Network Infrastructure for the Future Internet PPP (Public-Private Partnership): creating a smart network that is flexible, robust, and cost-effective.

The use of telecommunication services in Europe has grown remarkably in the last two decades. Responding to the ever-increasing demand for data, it is today commonly agreed that continuous long-term investment in research and development, and in the deployment of communication systems standards, has to be ensured. Global standards are a fundamental cornerstone of ubiquitous connectivity, ensuring worldwide interoperability and enabling multi-vendor capability and economies of scale. Early investment in technology research is necessary to develop and build these global standards, which today provide a complete and ready platform for a wide range of businesses across the entire value chain in Europe and beyond, enabling employment and economic growth. Discussions are currently ongoing between stakeholders to target a new partnership initiative (e.g. by means of a Public-Private Partnership, PPP) under the European Horizon 2020 (H2020) framework to address Information and Communication Technology (ICT) infrastructures and communication networks. According to ITU statistics, the number of subscribers is growing globally: the number of fixed telephone lines, which have low global penetration, is decreasing, whereas the number of mobile subscriptions is increasing fast, both globally and in developing countries.
Further growth of the European ICT market can be expected from an upgrade of European communication networks to real broadband systems, with significantly higher sustainable throughput rates than today, and from increased use of communication networks for other critical infrastructures.

1.4 5G Technology Offers

5G technology is going to be a new revolution in the mobile market. With 5G, a subscriber can use their cellular phone worldwide; the technology is also expected to reach the Chinese mobile market, and a user would be able to access, say, a German phone as a local phone. With the emergence of cell phones akin to PDAs, your whole office can now be at your fingertips, or in your phone. 5G technology has extraordinary data capabilities, with the ability to tie together unrestricted call volumes and very large data broadcasts within the latest mobile operating systems. It can handle the best technologies and offer capable handsets to customers. 5G technologies have an extraordinary capability to support software and consultancy services, and the router and switch technology used in 5G networks provides high connectivity.

1.5 Organization of the Report

This report starts with an overview of 5G mobile technology and its previous generations. The report is organized as follows:

Chapter 1: Introduction - This chapter briefly explains the overview of the report.
Chapter 2: An Introduction to IP Networks - This chapter summarises the key elements and ideas of IP networking, focusing on the current state of the Internet.
Chapter 3: IP Mobility - This chapter covers a number of topics that explore what is meant by 'IP mobility'.
Chapter 4: 5G Applications - This chapter explains the applications and advantages of 5G technology.
Chapter 5: Conclusions - This chapter summarizes the major accomplishments of this report.

CHAPTER 2 AN INTRODUCTION TO IP NETWORKS

2.1 Introduction

The Internet is believed by many to have initiated a revolution that will be as far-reaching as the industrial revolution of the 18th and 19th centuries. However, as the collapse of many 'dot.com' companies has proven, it is not easy to predict what impact the Internet will have on the future. In part, these problems can be seen as those normally associated with such a major revolution. Or perhaps the dot.com collapses were simply triggered by the move of the Internet from a primarily government-funded university research network to commercial enterprise, and the associated realisation that the Internet is not 'free'. Thus, whilst the Internet is widely acknowledged to have significantly changed computing, multimedia, and telecommunications, it is not clear how these technologies will evolve and merge in the future. It is not clear how companies will be able to charge to cover the costs of providing Internet connectivity, or for the services provided over the Internet. What is clear is that the Internet has already changed many sociological, cultural, and business models, and the rate of change is still increasing. Despite all this uncertainty, the Internet has been widely accepted by users and has inspired programmers to develop a wide range of innovative applications. It provides a communications mechanism that can operate over different access technologies, enabling the underlying technology to be upgraded without impacting negatively on users and their applications. The 'inter-networking' functionality that it provides overcomes many of the technical problems of traditional telecommunications, which related to inter-working different network technologies.
By distinguishing between the network and the services that may be provided over the network, and by providing one network infrastructure for all applications, thereby removing the inter-working issues, the Internet has reduced many of the complexities, and hence the cost, of traditional telecommunications systems. The Internet has an open standardisation process that enables its rapid evolution to meet user needs. The challenge for network operators is therefore to continue to ensure that these benefits reach the user, whilst improving the network. This chapter summarises the key elements and ideas of IP networking, focusing on the current state of the Internet. As such, the Internet cannot yet support real-time, wireless, and mobile applications; however, the Internet is continually evolving, and later chapters detail some of the protocols currently being developed to support such applications. This chapter begins with a brief history of IP networks, as understanding the history leads to an understanding of why things are the way they are. It then looks at the IP standardisation process, which is rather different from the 3G process. A person new to the IP world, attempting to understand IP and its associated protocols and to monitor the development of new protocols, would probably find it useful to understand the underlying philosophy and design principles usually adhered to by those working on Internet development. The section on IP design principles also discusses the important concept of layering, a useful technique for structuring a complex problem such as communications. These design principles are examined as to whether they are actually relevant for future wireless systems, and then each of the Internet layers is examined in more depth to give the reader an understanding of how, in practice, the Internet works. The penultimate section indicates some of the mechanisms that are available to provide security on the Internet.

2.2 A Brief History of IP

IP networks trace their history back to work done at the US Department of Defense (DoD) in the 1960s, which attempted to create a network that would be robust under wartime conditions. This robustness criterion led to the development of connectionless, packet-switched networks, radically different from the familiar phone networks, which are connection-oriented, circuit-switched networks. In 1969, the US Advanced Research Projects Agency Network (ARPANET) was used to connect four universities in America. In 1973, this network became international, with connectivity to University College London in the UK and the Royal Radar Establishment in Norway. By 1982, the American Department of Defense had defined the TCP/IP protocols as standard, and the ARPANET became the Internet as it is known today: a set of networks interconnected through the TCP/IP protocol suite. This decision by the American DoD was critical in promoting the Internet, as all computer manufacturers who wished to sell to the DoD now needed to provide TCP/IP-capable machines. By the late 1980s, the Internet was showing its power to provide connectivity between machines. FTP, the File Transfer Protocol, could be used to transfer files between machines (such as PCs and Apple Macs) that otherwise had no compatible floppy disk or tape drive format.
The Internet was also showing its power to provide connectivity between people, through e-mail and the related newsgroups, which were widely used within the worldwide university and research community.

Figure 2.1 Showing Internet Growth

In the early 1990s, the focus was on managing the amount of information that was already available on the Internet, and a number of information retrieval programs were developed; for example, 1991 saw the birth of the World Wide Web (WWW). In 1993, Mosaic, a 'point and click' graphical interface to the WWW, was created. This created great excitement, as the potential of an Internet network could now be seen by ordinary computer users. In 1994, the first multicast audio concert (the Rolling Stones) took place. By 1994, the basic structure of the Internet as we know it today was already in place. In addition to developments in security for the Internet, the following years have seen a huge growth in the use of these technologies. Applications that allow the user to perform online flight booking, or to listen to a local radio station whilst on holiday, have all been developed from this basic technology set. From just four hosts in 1969, there has been exponential growth in the number of hosts connected to the Internet, as indicated in Figure 2.1. There are now estimated to be over 400 million hosts, and the amount of traffic is still doubling every 6 months. In addition to the rapid technical development, the 1980s brought great changes in the commercial nature of the Internet. In 1979, the decision was made by several American universities, the DoD, and the NSF (the American National Science Foundation) to develop a network independent of the DoD's ARPANET. By 1990, the original ARPANET was completely dismantled, with little disruption to the new network. By the late 1980s, commercial Internet access had become available through organisations such as CompuServe. In 1991, the NSFNET lifted its restrictions on the use of its new network, opening up the means for electronic commerce. In 1992, the Internet Society (ISOC) was created. This non-profit, non-governmental, international organisation is the main body for most of the communities (such as the IETF, which develops the Internet standards) that are responsible for the development of the Internet. By the 1990s, companies were developing their own private intranets, using the same technologies and applications as those on the Internet. These intranets often have partial connectivity to the Internet.

2.3 IP Standardisation Process

Within the ISOC, as indicated in Figure 2.2, there are a number of bodies involved in the development of the Internet and the publication of standards. The Internet Research Task Force (IRTF) is involved in a number of long-term research projects. Many of the topics discussed within the mobility and QoS chapters of this book still have elements within this research community; an example is the IRTF working group investigating the practical issues involved in building a differentiated services network. The Internet Engineering Task Force (IETF) is responsible for technology transfer from this research community, which allows the Internet to evolve. This body is organised into a number of working groups, each of which has a specific technical work area. These groups communicate and work primarily through e-mail. Additionally, the IETF meets three times a year.
The output of any working group is a set of recommendations to the IESG, the Internet Engineering Steering Group, for standardisation of protocols and protocol usage. The IESG is directly responsible for the movement of documents towards standardisation and the final approval of specifications as Internet standards. Appeals against decisions made by the IESG can be made to the IAB, the Internet Architecture Board. This technical advisory body aims to maintain a cohesive picture of the Internet architecture. Finally, IANA, the Internet Assigned Numbers Authority, has responsibility for the assignment of unique parameter values (e.g. port numbers). The ISOC is responsible only for the development of the Internet networking standards. Separate organisations exist for the development of many other aspects of the 'Internet' as we know it today; for example, Web development takes place in a completely separate organisation. There remains a clear distinction between the development of the network and of the applications and services that use the network.

Figure 2.2 Showing organisation of the Internet society

Within this overall framework, the main standardisation work occurs within the IETF and its working groups. This body is significantly different from conventional standards bodies such as the ITU, the International Telecommunication Union, in which governments and the private sector co-ordinate global telecommunications networks and services, or ANSI, the American National Standards Institute, which again involves both public and private sector companies. The private sector in these organisations is often accused of promoting its own patented technology solutions to any particular problem, whilst the use of patented technology is avoided within the IETF. Instead, the IETF working groups and meetings are open to any person who has anything to contribute to the debate. This does not, of course, prevent groups of people with similar interests all attending; businesses have used this route to ensure that their favourite technology is given a strong (loud) voice. The work of the IETF and the drafting of standards are devolved to specific working groups. Each working group belongs to one of the nine specific functional areas, covering Applications to Sub-IP. These working groups, which focus on one specific topic, are formed when there is a sufficient weight of interest in a particular area. At any one time, there may be of the order of 150 working groups. Anybody can make a written contribution to the work of a group; such a contribution is known as an Internet Draft. Once a draft has been submitted, comments may be made on the e-mail list, and if all goes well, the draft may be formally considered at the next IETF meeting. IETF meetings are attended by upwards of 2000 individual delegates. Within the meeting, many parallel sessions are held by each of the working groups. The meetings also provide a time for 'BOF' (Birds of a Feather) sessions, where people interested in working on a specific task can see if there is sufficient interest to generate a new working group. Any Internet Draft has a lifetime of 6 months, after which it is updated and re-issued following e-mail discussion, adopted, or, most likely, dropped. Adopted drafts become RFCs (Requests For Comments); for example, IP itself is described in RFC 791. Working groups are disbanded once they have completed the work of their original charter.
2.4 IP Design Principles

In following IETF e-mail debates, it is useful to understand some of the underlying philosophy and design principles that are usually strongly adhered to by those working on Internet development. However, it is worth remembering that RFC 1958, 'Architectural Principles of the Internet', does state that "the principle of constant change is perhaps the only principle of the Internet that should survive indefinitely" and, further, that "engineering feed-back from real implementations is more important than any architectural principles". Two of these key principles, layering and the end-to-end principle, have already been mentioned in the introductory chapter as part of the discussion of the engineering benefits of 'IP for 3G'. However, this section begins with what is probably the more fundamental principle: connectivity.

Figure 2.3 Possible carriers of IP packets - satellite, radio, telephone wires, birds.

2.4.1 Connectivity

Providing connectivity is the key goal of the Internet. It is believed that focusing on this, rather than on trying to guess what the connectivity might be used for, has been behind the exponential growth of the Internet. Since the Internet concentrates on connectivity, it has supported the development not just of a single service like telephony, but of a whole host of applications all using the same connectivity. The key to this connectivity is the inter-networking layer: the Internet Protocol provides one protocol that allows for seamless operation over a whole range of different networks. Indeed, the method of carrying IP packets has been defined for each of the carriers illustrated in Figure 2.3; further details can be found in RFC 2549, 'IP over Avian Carriers with Quality of Service'. Connectivity, clearly a benefit to users, is also beneficial to the network operators. Those that provide Internet connectivity immediately ensure that their users can reach users worldwide, regardless of local network providers. To achieve this connectivity, the different networks need to be interconnected. They can achieve this either through peer-to-peer relationships with specific carriers, or through connection to one of the (usually non-profit) Internet exchanges. These exchanges exist around the world and provide the physical connectivity between different types of network and different network suppliers (the ISPs, Internet Service Providers). An example of an Internet exchange is LINX, the London Internet Exchange. This exchange is significant because most transatlantic cables terminate in the UK, and separate submarine cables then connect the UK, and hence the US, to the rest of Europe. Thus, it is not surprising that LINX statistics show that 45% of the total Internet routing table is available by peering at LINX. A key difference between LINX and, for example, the telephone systems that interconnect the UK and US is its simplicity. The IP protocol ensures that interworking will occur; the exchange could be a simple piece of Ethernet cable to which each operator attaches a standard router.

Figure 2.4 Circuit switched communications.

The focus on connectivity also has an impact on how protocol implementations are written. A good protocol implementation is one that works well with other protocol implementations, not one that adheres rigorously to the standards. Throughout Internet development, the focus is always on producing a system that works; analysis, models, and optimisations are all considered a lower priority.
This connectivity principle can be applied in the wireless environment: in applying the IP protocols, a system is invariably developed that is less optimised, specifically less bandwidth-efficient, than current 2G wireless systems. But a system may also be produced that gives wireless users immediate access to the full connectivity of the Internet, using standard programs and applications, whilst leaving much scope for innovative, sub-IP development of the wireless transmission systems. Further, as wireless systems become broadband, like the Hiperlan system for example, such efficiency concerns will become less significant. Connectivity was one of the key drivers for the original DoD network. The DoD wanted a network that would provide connectivity even if large parts of the network were destroyed by enemy action. This, in turn, led directly to the connectionless packet network seen today, rather than a circuit network such as that used in 2G mobile systems. Circuit-switched networks, illustrated in Figure 2.4, operate by the user first requesting that a path be set up through the network to the destination, by dialling the telephone number. This message is propagated through the network, and at each switching point, information (state) is stored about the request and resources are reserved for use by the user. Only once the path has been established can data be sent. This guarantees that data will reach the destination. All the data to the destination will follow the same path, and so will arrive in the order sent. In such a network, it is easy to ensure that the delays data experience through the network are constrained, as the resource reservation means that there is no possibility of congestion occurring except at call set-up time (when a busy tone is generated and sent to the calling party). However, there is often a significant time delay before data can be sent: it can easily take 10 s to connect an international, or mobile, call. Further, this type of network may be used inefficiently, as a full circuit's worth of resources is reserved irrespective of whether it is used. This is the type of network used in standard telephony and 2G mobile systems.

Figure 2.5 Packet switched network.

In a connectionless network (Figure 2.5), there is no need to establish a path for the data through the network before data transmission, and no state information about particular communications is stored within the network. Instead, each packet of data carries the destination address and can be routed to that destination independently of the other packets that make up the transmission. There is no guarantee that any packet will reach the destination, as it is not known whether the destination can be reached when the data are sent. There is no guarantee that all data will follow the same route to the destination, so there is no guarantee that the data will arrive in the order in which they were sent. And there is no guarantee that data will not suffer long delays due to congestion.
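To make the contrast concrete, here is a toy sketch in Python (the addresses and next-hop names are invented for illustration; no real router works from a Python dictionary) of connectionless forwarding, in which every packet carries its destination address and each router makes an independent, stateless, per-packet decision:

# Toy sketch of connectionless forwarding. Addresses and next-hop
# names are invented; no per-conversation state exists in the router.
ROUTING_TABLE = {
    "192.0.2.7": "router-B",       # next hop for this destination
    "198.51.100.9": "router-C",
}

def forward(packet):
    # Each packet is handled on its own: the router consults its table,
    # keeps no memory of the packet afterwards, and gives no guarantees.
    next_hop = ROUTING_TABLE.get(packet["dst"])
    return next_hop if next_hop is not None else "dropped"

pkt1 = {"dst": "192.0.2.7", "data": "hello"}
pkt2 = {"dst": "192.0.2.7", "data": "world"}
print(forward(pkt1), forward(pkt2))   # pkt2 could take a different path
                                      # if the table changed in between

The absence of per-conversation state in the table is precisely what allows packets to take alternative routes, and equally what makes reordering and loss possible.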
Whilst such a network may seem much worse than the guaranteed network described above, its original advantage from the DoD point of view was that it could be made highly resilient. Should any node be destroyed, packets would still be able to find alternative routes through the network, and no state information about the data transmission could be lost, as all the required information is carried with each data packet. Another advantage of a connectionless network is that it is better suited to the delivery of small messages: in a circuit-switched, connection-oriented network, the amount of data and time needed to establish a data path would be significant compared with the amount of useful data. Short messages, such as data acknowledgements, are very common in the Internet; indeed, measurements suggest that half the packets on the Internet are no more than 100 bytes long (although more than half the total data transmitted comes in large packets). Similarly, once a circuit has been established, sending small, irregular data messages would be highly inefficient and wasteful of bandwidth, as, unlike in the packet network, other data could not access the unused resources.

2.4.2 The End-to-end Principle

The second major design principle is the end-to-end principle. This is really a statement that only the end systems can correctly perform functions that are required from end to end, such as security and reliability, and that these functions should therefore be left to the end systems. End systems are the hosts that are actually communicating, such as a PC or a mobile phone. Figure 3.6 illustrates the difference between the Internet's end-to-end approach and the approach of traditional telecommunication systems such as 2G mobile systems. The end-to-end approach removes much of the complexity from the network and prevents unnecessary processing, as the network does not need to provide functions that the terminal will need to perform for itself. This principle does not mean that a communications system cannot provide an enhancement by offering an incomplete version of a specific function (for example, local error recovery over a lossy link). As an example, consider the handling of corrupted packets. During the transmission of data from one application to another, it is possible that errors will occur, and in many cases these errors will need to be corrected for the application to proceed correctly. It would be possible for the network to ensure that corrupted packets were not delivered to the terminal by running a protocol across each segment of the network that provided local error correction. However, this is a slow process, and with modern, reliable networks, most hops will have no errors to correct. The slowness of the procedure will even cause problems for certain types of application, such as voice, which prefer rapid data delivery and can tolerate a certain level of data corruption. If accurate data delivery is important then, despite the network error correction, the application will still need to run an end-to-end error correction protocol like TCP, because errors could still occur either in an untrusted part of the network or as the data are handled on the end terminals, between the application sending/receiving the data and the terminal transmitting/delivering it. Thus, hop-by-hop error correction is not sufficient for many applications' requirements, yet it leads to an increasingly complex network and slower transmission.
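A minimal sketch of the end-to-end argument (illustrative only; real endpoints rely on TCP's checksum and retransmission machinery rather than the CRC used here): the two communicating hosts verify integrity over the whole path, something no intermediate hop can do on their behalf:

import zlib

def send(data: bytes) -> bytes:
    # The sender appends a check value computed over the whole payload.
    return data + zlib.crc32(data).to_bytes(4, "big")

def receive(frame: bytes) -> bytes:
    # Only the receiving end can confirm the data survived the whole
    # journey, including handling inside the hosts themselves.
    data, check = frame[:-4], frame[-4:]
    if zlib.crc32(data).to_bytes(4, "big") != check:
        raise ValueError("corrupted in transit: request a retransmission")
    return data

assert receive(send(b"end-to-end")) == b"end-to-end"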
2.4.3 Layering and Modularity

One of the key design principles is that, in order to be readily implementable, solutions should be simple and easy to understand. One way to achieve this is through layering. This is a structured way of dividing the functionality in order to remove or hide complexity. Each layer offers specific services to the layers above, whilst hiding the implementation detail from them. Ideally, there should be a clean interface between each layer. This simplifies programming and makes it easier to change any individual layer implementation. For communications, a protocol exists that allows a specific layer on one machine to communicate with the peer layer on another machine, and each protocol belongs to one layer. Thus, the IP layer on one machine communicates with the peer IP layer on another machine to provide a packet delivery service, which is used by the transport layer above it to provide reliable packet delivery by adding error recovery functions. Extending this concept in the orthogonal direction gives the concept of modularity: any protocol performs one well-defined function (at a specific layer). These modular protocols can then be reused. Ideally, protocols should be reused wherever possible, and functionality should not be duplicated. The problems of functionality duplication were indicated in the previous section, where interactions occur between similar functionality provided at different layers. Avoiding duplication also makes it easier for users and programmers to understand the system. The layered model of the Internet shown here is basically a representation of the current state of the network; it is a model designed to describe the solution. The next few sections look briefly at the role of each of the layers.
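The idea can be sketched in a few lines of Python (the header formats below are invented stand-ins, not real wire formats): each layer wraps the data from the layer above with its own header and never inspects the payload, which is what allows any one layer's implementation to be swapped out without touching the others:

def transport_wrap(app_data: bytes, port: int) -> bytes:
    # Transport layer adds its own header (here, just a port number).
    return b"TP:" + str(port).encode() + b"|" + app_data

def ip_wrap(segment: bytes, dst_ip: str) -> bytes:
    # IP layer treats the whole transport segment as opaque payload.
    return b"IP:" + dst_ip.encode() + b"|" + segment

def link_wrap(packet: bytes, dst_mac: str) -> bytes:
    # Link layer wraps the IP packet for one hop on the local network.
    return b"ETH:" + dst_mac.encode() + b"|" + packet

frame = link_wrap(ip_wrap(transport_wrap(b"GET /", 80), "192.0.2.7"),
                  "aa:bb:cc:dd:ee:ff")
print(frame)  # the peer stack strips the headers in the reverse order,
              # each layer reading only its own header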
2.5 Making the Internet Work

Figure 2.6 Internet Layers.

So far, we have considered the history of the Internet and looked at the standardisation process, so that anyone who wants to become involved in Internet protocol development knows where to start. Fundamentally, the Internet is based on packet switching technology, and the IP protocol in particular is key to providing the connectivity. The Internet is described by over 3000 RFCs. What are the actual physical components required to build an Internet, and which of the many thousands of protocols will need to be implemented? This material starts to answer these questions. This section is structured roughly according to the layers described above. The link layer section looks at how a user connects to the Internet, using either a modem on the end of a residential telephone line or an Ethernet connection in an office. Within the inter-networking layer, routers, the main physical pieces of equipment that make up the Internet, are considered, as well as Internet addressing and IPv6, the next generation of the Internet Protocol. The transport layer is covered only briefly, as much of that material is covered in a later chapter. The application layer is not the focus of this book, but we still mention the key Domain Name System (DNS).

2.5.1 Link Layer

This is the layer beneath the internetworking layer; it is responsible for actually transmitting the packets on a network. There is a large range of different link layer technologies, but two, the public telephone network and Ethernet-based networks, are most commonly used to connect host machines to the Internet. This section considers how these connections are established and used. When a user first logs on to the computer and starts up the Internet service, the computer has to 'dial up' the Internet, that is, ring the telephone number of the ISP. At the far end of the phone call are (a rack of) 56-kbit/s modems and an Internet access server. Once the telephone connection is established, a link layer protocol is run between the user's machine and the server, typically PPP (Point-to-Point Protocol). This has three roles: it establishes and tests the link; it helps the machine with any required auto-configuration, typically assigning it an IP address; and, during this process, authentication takes place, typically with the user entering a user name and password, which authorises the user to use the ISP service. Ethernet links, the basis of most local area networks (office LANs), are slightly more complex than telephone links into the Internet. This is because an Ethernet cable is typically shared between many different machines, which may all wish to communicate simultaneously, and, after all, if everyone shouts at once, no one will hear anything. The Ethernet driver on a host is therefore responsible for ensuring that an Ethernet frame does not interfere with any other transmissions that may already be using the link. It achieves this through use of the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol. In essence, this involves the Ethernet driver listening to the cable: when it detects a quiet spell, it can begin to transmit its packet. Whilst transmitting the packet, it must still listen to the Ethernet. This enables it to detect whether its packet has been scrambled through interference with a packet from another machine, which may have heard the same quiet spell. If it does detect such a collision, the Ethernet driver must cease transmission and wait for a random time period before trying to transmit again (the random time period ensures that the two machines fall out of synchronisation with each other). The Ethernet network gives a maximum of 10 Mbit/s, which can be shared between hosts. Whilst a host will often be able to access the full Ethernet bandwidth, at other times congestion may occur, which severely affects the quality of service. Before a user can use the Ethernet to transmit IP packets, some other link layer protocols may need to run. One of the most common of these is DHCP, the Dynamic Host Configuration Protocol. (Computers on an Ethernet may have fixed IP addresses, in which case DHCP is not needed.) DHCP automatically provides configuration parameters to the host, including the IP address that the host should use and the address of the router that provides connectivity to the rest of the Internet. DHCP is also typically used to configure default values for services such as DNS.

Figure 2.7 Process of sending IP packets over Ethernet.

Whilst there is usually no security required to establish the Ethernet link, a user will still need to authenticate themselves to various servers, such as those providing e-mail or WWW access. The computer can then send and receive IP data packets. Because the link is shared, it is possible that an outbound IP packet is destined for another host on the same Ethernet rather than for the router; the IP module is responsible for deciding to which computer the packet should be sent.
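That decision can be sketched with Python's standard ipaddress module (the subnet and router address below are example values of the kind DHCP would have supplied): if the destination lies within the local subnet, the packet is sent directly to that host; otherwise it is handed to the default router:

from ipaddress import ip_address, ip_network

LOCAL_SUBNET = ip_network("192.0.2.0/24")   # example values of the kind
DEFAULT_ROUTER = ip_address("192.0.2.1")    # DHCP would have supplied

def next_hop(dst: str):
    # Deliver directly if the destination shares our Ethernet,
    # otherwise hand the packet to the default router.
    d = ip_address(dst)
    return d if d in LOCAL_SUBNET else DEFAULT_ROUTER

print(next_hop("192.0.2.55"))    # same subnet: sent straight to the host
print(next_hop("203.0.113.8"))   # elsewhere: sent via the router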
2.5.2 Inter-networking Layer

Figure 2.8 Inter-networking layer

The key layer is the inter-networking layer. This is the layer that has given the Internet its name. Its key protocol, IP, is the only protocol that every Internet-enabled host will use; this is the protocol that ensures the connectivity. IP is a packet-oriented, connectionless protocol. Because different physical networks, such as the Ethernet and telephone networks above, both support the IP protocols, two hosts on different networks can communicate easily. The key function that this layer provides is routing: delivering packets from one IP address to another. In this section, the IP packet header is considered first. The key component of this header is the IP address, which has already been mentioned several times. The section then discusses how packets are delivered from one host, across a network of routers, all the way to the destination host. There then follows a discussion of the key concept of subnets, of one of the current problems of IP, the shortage of IP addresses, and finally of IPv6, the next generation of IP.

2.5.3 Transport Layer

Many of the functions that a user expects from the network are actually delivered in the Internet through end-to-end transport layer protocols. Key transport layer protocols are TCP, UDP, and RTP; these are all discussed more thoroughly in the chapter on QoS. Whilst TCP is by far the most commonly used transport layer protocol, when considering QoS it is often assumed that UDP is used as the transport protocol.

Figure 2.9 Transport layer

This is particularly true when QoS is being used to support real-time services, as TCP requires data re-transmissions that would add unacceptable delay for such applications. One practical issue here is that many network devices, such as NATs and firewalls, do not readily pass UDP packets.

2.5.4 Application Layer

In an IP 3G network, there is little extra that needs to be known about this layer. By providing a straight IP network, the wireless operator will have created the environment that allows users to access freely any application of their choice. Users will no longer be tied to the applications and protocols supported by their network operator. This freedom can only be expected to drive up the usage and value of the network. Similarly, any service provided by the network operator for the benefit of its customers, such as e-mail or WWW hosting, or even WWW proxy servers to provide information filtering, can also be offered for use by any other user of the Internet. Within this flexibility, however, there is one key application layer service that most network operators would provide: the Domain Name Service, or DNS. To access almost every Internet service, a user will use a server name, such as www.futures.bt.co.uk, to identify the service required. The first thing the host needs to do is to translate this name into an IP address. To do this, the Domain Name System (DNS) is used. DNS is a distributed network of servers that provides resolution of a name to an IP address. Names are used because humans find them much easier to remember, whereas computers find numbers easier to handle. A distributed approach is used to provide this service, as it is impractical for any single machine to keep a table mapping all machine names to addresses. So, to find the IP address, the user's computer first checks to see whether it has cached an entry for this name.
Caching simply refers to the process by which a user's computer, having consulted the DNS system for a name-to-address mapping, will remember the mapping for a short while, as it is highly likely that the name will be used again shortly. If there is no cached entry, the user's computer sends a request to a DNS server within the local domain, whose address has to be provided during the Internet configuration process. This DNS server then performs the look-up. If the server has no entry in its records for the machine required, it passes the query to a root server. This server knows only the identities of the machines that keep records of the top-level domains (.uk, .com, and so on), so it forwards the query down a level to a suitable server. Thus, the query is propagated through the hierarchy until it reaches the server authoritative for the domain bt.co.uk. This server contains a mixture of actual address mappings (for example, for the server www on bt.co.uk, for which it is the authoritative server) and a set of references to servers for subdomains such as futures. The machine responsible for the futures DNS then returns the address of the machine called www (probably a web server) to the original querying DNS server. This server caches the result and returns the IP address of the machine to the user's PC.
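The look-up logic just described can be compressed into a toy resolver (the zone data below is invented purely to mirror the text's example; a real resolver speaks the DNS wire protocol and honours per-record caching lifetimes): check the cache first, otherwise walk the delegation hierarchy from the root and cache the answer:

# Toy resolver: cache first, then follow referrals down the hierarchy.
HIERARCHY = {
    ".": {"uk.": "uk-server"},
    "uk-server": {"co.uk.": "co-uk-server"},
    "co-uk-server": {"bt.co.uk.": "bt-server"},
    "bt-server": {"www.futures.bt.co.uk.": "10.0.0.7"},  # authoritative
}

cache = {}

def resolve(name: str) -> str:
    if name in cache:                 # remembered from an earlier query
        return cache[name]
    server = "."                      # start at a root server
    while True:
        zone = HIERARCHY[server]
        if name in zone:              # reached the authoritative server
            cache[name] = zone[name]
            return zone[name]
        # otherwise follow the referral one level down the hierarchy
        server = next(ref for suffix, ref in zone.items()
                      if name.endswith(suffix))

print(resolve("www.futures.bt.co.uk."))   # walks the hierarchy
print(resolve("www.futures.bt.co.uk."))   # answered from the cache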
3.1.2 The Problem of IP Mobility

Broadly speaking, there are three ways of viewing the 'problem of IP mobility', corresponding to the three layers of the protocol stack at which people think it should be solved:

Solve the problem at Layer 2 – This view holds that the problem is one of 'mobility' to be solved by a specialist Layer 2 protocol, and that the movement should be hidden from the IP layer.

Solve the problem at the application layer – This view similarly holds that the IP layer should not be affected by the mobility, but instead solves the problem above the IP layer.

Solve the problem at the IP layer – Roughly speaking, this view holds that 'IP mobility' is a new problem that requires a specialist solution at the IP layer.

Layer 2 Solutions

This approach says that mobility should be solved by a specialist Layer 2 protocol. As far as the IP network is concerned, mobility is invisible: IP packets are delivered to a router, and mobility occurs within the subnet below. The protocol maintains a dynamic mapping between the mobile's fixed IP address and its variable Layer 2 address, and is equivalent to a specialist version of Ethernet's ARP (Address Resolution Protocol). This is the approach taken by wireless local area networks (LANs), e.g. through the inter-access point protocol (IAPP). Although such protocols can be fast, they do not scale to large numbers of terminals. Also, a Layer 2 mobility solution is specific to a particular Layer 2, so inter-technology handovers will be hard.

Application-layer Solutions

Although generally called application-layer solutions, this term really means any solution above the IP layer. An example here would be to reuse DNS (the Domain Name System). Today, DNS is typically used to resolve a website's name (e.g. www.bt.com) into an address (62.7.244.127), which tells the client where the server with the required web page is. At first sight, this is promising for mobility, and in particular for personal mobility: as the mobile moves, it could acquire a new IP address and update its DNS entry, and so it could still be reached. However, DNS has been designed under the assumption that servers move only very rarely, so to improve efficiency, the name-to-address mapping is cached throughout the network, and indeed in a client's machine. This means that if DNS is used to solve the mobility problem, an out-of-date cached address will often be looked up (a sketch of this caching problem is given at the end of this section). Although there have been attempts to modify DNS to make it more dynamic, essentially by forcing all caching lifetimes to be zero, this makes everyone suffer the same penalty even when it is not necessary. Section 5.3 examines another IP protocol, SIP, for application-layer mobility.

Layer 3 Solutions

The two previous alternatives have limited applicability, so the IP community has been searching for a specialist IP-mobility solution, in other words a Layer 3 solution. This also fits in with one of the Internet's design principles: 'obey the layer model'. Since the IP layer is about delivering packets, from a purist point of view the IP layer is the correct place to handle mobility. From a practical point of view, it should then mean that other Internet protocols will work correctly; for example, the transport and higher-level connections are maintained when the mobile changes location. Overall, this suggests that Layer 3 and sometimes Layer 2 solutions are suitable for terminal mobility, and application-layer solutions are sometimes suitable for personal mobility.
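The following is a minimal sketch of the caching problem described above, assuming a simple TTL-based client cache; the names and addresses are purely illustrative. Once a mapping is cached, a mobile that moves and updates its DNS record remains unreachable at its new address until the cached entry expires.

    import time

    dns_records = {"mobile.example.net": "10.0.0.1"}   # authoritative mapping
    cache = {}                                          # client cache: name -> (address, expiry)
    TTL = 300                                           # seconds; zero would disable caching

    def lookup(name):
        entry = cache.get(name)
        if entry and entry[1] > time.time():    # cached and not yet expired
            return entry[0]                     # may be stale if the mobile has moved
        address = dns_records[name]             # "query" the authoritative server
        cache[name] = (address, time.time() + TTL)
        return address

    lookup("mobile.example.net")                       # caches 10.0.0.1
    dns_records["mobile.example.net"] = "10.0.7.42"    # mobile moves and updates its record
    print(lookup("mobile.example.net"))                # still 10.0.0.1 until the TTL expires

Setting the TTL to zero would remove the staleness, but, as noted above, at the cost of a fresh query on every single lookup for everyone.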
3.1.3 Locators vs. Identifiers

One way of thinking about the problem of mobility is that we must have some sort of dynamic mapping between a fixed identifier (who is the mobile to whom packets are to be delivered?) and a variable locator (where in the network are they?). So, for instance, in the DNS case, the domain name is the identifier and the IP address is the locator. Similarly, a WWW portal (e.g. Yahoo) would have the user's e-mail address (for example) as the identifier and again the current IP address as the locator. Since mobility is so closely tied to the concept of an identifier, it is worth thinking about the various types of identifier that are likely:

Terminal ID – This is the (fixed) hardware address of the network interface card. A terminal may actually have several cards.

Subscription ID – This is something that a service provider uses as its own internal reference, for instance so that it can keep records for billing purposes. The service provided could be at the application or network layer.

User ID – This identifies the person and is clearly central to personal mobility. During call set-up, there could be some process to check the user's identity (perhaps entering a password) that might trigger association with a subscription ID. In general, a user ID might be associated with one or many subscription IDs, or vice versa.

Session ID – This identifies a particular voice-over-IP call, instant-messaging session, HTTP session, and so on. Whereas the other three IDs are fixed (or at least long-lasting), the session ID is not.

3.2 Mobile IP - A Solution for Terminal Macromobility

3.2.1 Outline of Mobile IP

The best-known proposal for handling macro-mobility handovers is Mobile IP. Mobile IP has been developed over several years at the IETF, initially for IPv4 and now for IPv6 as well. Mobile IP is the nearest thing to an agreed standard in IP mobility. However, despite being in existence for many years and being conceived as a short-term solution, it still has very limited commercial deployment (the reasons for this are discussed later); Mobile IP products are available from Nextel and ipUnplugged, for example. In Mobile IP, a mobile host is always identified by its home address, regardless of its current point of attachment to the Internet. Whilst situated away from its home, a mobile also has another address, called a 'Care-of Address' (CoA), which is associated with the mobile's current location. Mobile IP solves the mobility problem by storing a dynamic mapping between the home IP address, which acts as the permanent identifier, and the care-of address, which acts as the temporary locator. The key functional entity in Mobile IP is the Home Agent, a specialised router that maintains the mapping between a mobile's home and care-of addresses. Each time the mobile moves on to a new subnet (typically, this means it has moved on to a new Access Router), it obtains a new CoA and registers it with the Home Agent. Mobile IP means that a correspondent host can always send packets to the mobile: the correspondent addresses them to the mobile's home address, so the packets are routed to the home link, where the home agent intercepts them and (usually) uses IP-in-IP encapsulation to tunnel them to the mobile's CoA. In other words, the home agent creates a new packet, with the new header containing the CoA and the new data part consisting of the complete original packet, i.e. including the original header. At the other end of the tunnel, the original packet can be extracted by removing the outer IP header (which is called decapsulation). Note that Mobile IP is only concerned with traffic to the mobile: in the reverse direction, packets are sent directly to the correspondent host, which is assumed to be at home. (If it is not, Mobile IP must be used in that direction as well.)
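The following Python sketch models the home agent behaviour just described. The addresses are hypothetical and the dictionary 'packets' only stand in for real IP headers; actual Mobile IP operates on genuine IP packets and registration messages.

    bindings = {}   # home address (identifier) -> current care-of address (locator)

    def register(home_address, care_of_address):
        # Called when the mobile moves to a new subnet and obtains a new CoA.
        bindings[home_address] = care_of_address

    def intercept(packet):
        # The home agent intercepts packets arriving on the home link and
        # tunnels them (IP-in-IP) to the mobile's current care-of address:
        # a new outer header whose payload is the complete original packet.
        coa = bindings[packet["dst"]]
        return {"src": "home-agent", "dst": coa, "payload": packet}

    def decapsulate(tunnelled):
        # At the tunnel endpoint the outer header is removed, recovering the
        # original packet, original header included.
        return tunnelled["payload"]

    register("mobile-home-addr", "coa-1")
    original = {"src": "correspondent", "dst": "mobile-home-addr", "payload": "data"}
    print(decapsulate(intercept(original)) == original)   # True: the original packet survives intact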
3.2.2 Mobile IPv4

The Mobile IPv4 protocol is designed to provide mobility support in an IPv4 network. As well as the Home Agent (HA), it introduces another specialised router, the Foreign Agent (FA); for example, each access router could be an FA. A mobile node (MN) can tell which FA it is 'on' by listening to 'agent advertisements', which are periodically broadcast by each FA. The advertisement includes the FA's network prefix. When the MN moves, it will not realise that it has done so until the next time it hears an FA advertisement; it then sends a registration request message. Alternatively, the MN can ask an agent to send its advertisement immediately, instead of waiting for the periodic advertisement.

Figure 3.1: IPv4 mobile address

Mobile IPv4 comes in two variants, depending on the form of the CoA. In the first, the MN uses the FA's address as its CoA, and the FA registers this 'foreign agent care-of address' (FA-CoA) with the HA. Hence, packets are tunnelled from the HA to the FA, where the FA decapsulates them and forwards the original packets directly to the MN. In the second variant, the MN obtains a CoA for itself, e.g. through DHCP, and registers this 'co-located CoA' (CCoA) either directly with the HA or via the FA. Tunnelled packets from the HA are decapsulated by the MN itself. The main benefit of the FA-CoA approach is that fewer globally routable IPv4 addresses are needed, since many MNs can be registered at the same FA. Since IPv4 addresses are scarce, this is generally preferred. The approach also removes the overhead of encapsulation over the radio link, although, in practice, header compression can be used to shrink the header in either the FA-CoA or CCoA scenario.

3.2.3 Mobile IPv6

Mobile IPv6 is designed to provide mobility support in an IPv6 network. It is very similar to Mobile IPv4, but takes advantage of various improved features of IPv6 to alleviate (or solve) some of Mobile IPv4's problems:

Figure 3.2: IPv6 mobile address

Only CCoAs need to be used, because of the increased number of IPv6 addresses.

There are no foreign agents. This is enabled by the enhanced features of IPv6, such as Neighbour Discovery, Address Auto-configuration, and the ability of any router to send Router Advertisements.

Route Optimisation is now built in as a fundamental (compulsory) part of the protocol. Route Optimisation binding updates are sent to CNs by the MN (rather than by the home agent); a sketch of this follows at the end of the section.

There is no need for reverse tunnelling. The MN's home address is carried in a packet in the Home Address destination option. This allows an MN to use its care-of address as the Source Address in the IP header of packets it sends, and so packets pass normally through firewalls.

Packets are not encapsulated, because the MN's CoA is carried by the Routing Header option added on to the original packet. This adds less overhead and possibly simplifies QoS (see later).

There is no need for separate control packets, because the Destination Option allows control messages to be piggybacked on to any IPv6 packet.
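As a rough illustration of the Route Optimisation point above, the sketch below models a correspondent node's binding cache; the names and packet structures are hypothetical simplifications of the real IPv6 headers and options.

    cn_binding_cache = {}   # home address -> care-of address, learnt from binding updates

    def receive_binding_update(home_address, care_of_address):
        # The MN itself (not the home agent) informs the CN of its current CoA.
        cn_binding_cache[home_address] = care_of_address

    def send_to_mobile(home_address, data):
        coa = cn_binding_cache.get(home_address)
        if coa is None:
            # No binding known: fall back to sending via the home address,
            # where the home agent will forward the packet.
            return {"dst": home_address, "payload": data}
        # Route-optimised: address the packet straight to the CoA, carrying the
        # home address alongside (in a routing header) instead of encapsulating.
        return {"dst": coa, "home_address_option": home_address, "payload": data}

    receive_binding_update("mn-home-addr", "mn-coa")
    print(send_to_mobile("mn-home-addr", "hello"))   # goes directly to mn-coa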
CHAPTER 4 5G APPLICATIONS

4.1 Application Technologies

Application technologies are not and will not be specified by the 3GPP. There are, however, several emerging technologies that will be used to provide common platforms for mobile-telecommunication application development. They are typically promoted by organizations formed by companies in the telecommunications industry.

4.1.1 Wireless Application Protocol

The wireless application protocol (WAP) is an HTML- and HTTP-derived protocol especially designed for the wireless environment. It will probably be the most widely used protocol in the first phase of UMTS, as it is already used in GSM networks. This head start will gain more momentum once GPRS networks are launched with their faster data capabilities. In the plain GSM of today, WAP applications clearly suffer from the slow data rates. Note that both WAP 2.0 and i-mode will use the same XML-based language, extensible HTML (XHTML). WAP is promoted by the WAP Forum (www.wapforum.org).

4.1.2 Outlook Web App (OWA)

Outlook Web App (OWA), originally called Outlook Web Access and before that Exchange Web Connect (EWC), is a webmail service of Microsoft Exchange Server 5.0 and later. Outlook Web App comes as a part of Microsoft Exchange Server or Microsoft Office 365. It is used to access e-mail (including support for S/MIME), calendars, contacts, tasks, documents (used with SharePoint or, in 2010, Office Web Apps), and other mailbox content when access to the Microsoft Outlook desktop application is unavailable. In the Exchange 2007 release, OWA also offers read-only access to documents stored in Microsoft SharePoint sites and network (UNC) shares. Microsoft provides Outlook Web App as part of Exchange Server to allow users to connect remotely via a web browser. Some of the functionality of Outlook is also available in this web 'look-alike'.

Figure 4.1: Outlook Web App (OWA)

The most important difference is that Microsoft Outlook allows users to work with e-mail, calendars, etc. even when an Internet connection is unavailable, whereas OWA requires an Internet connection to function. Outlook Web App also works with Microsoft Office 365, together with Microsoft Lync and SharePoint document sharing.

4.1.3 BREW

Binary Runtime Environment for Wireless (BREW) is a platform-independent application execution environment for wireless devices, developed by Qualcomm. The BREW platform is based on C/C++, one of the most popular programming languages, so it is easy for application developers to adopt BREW. The idea behind BREW is very similar to Java, except that BREW is specifically developed for the wireless world, and there is no need for platform-specific solutions in BREW, such as a Java Virtual Machine. BREW requires little memory on the phone, which makes it more easily ported to low-end mass-market phones. The BREW platform is free for both handset manufacturers and application developers; the business model is based on licensing fees from network operators.

4.1.4 Bluetooth

Bluetooth is a protocol for short-range wireless links. It is not designed only for mobile telecommunication applications; a Bluetooth link can connect several electrical appliances together. A typical Bluetooth usage environment will be the peripheral devices of a PC.
Most PC users surely want to get rid of the jungle of wires around their PCs, and Bluetooth will provide an affordable method to do this. In 3G, Bluetooth links can be used to connect the UE to various devices, such as headsets, printers, and control devices. Bluetooth is promoted especially by Ericsson; indeed, Bluetooth is a trademark owned by Ericsson.

4.1.5 I-mode

I-mode is an NTT DoCoMo proprietary data service technology used in 2G PDC networks in Japan. I-mode is very much like what WAP + GPRS will be in the future. It is based on a packet data network (PDC-P), where usage is charged on the amount of data retrieved, not on the amount of time spent in the network. I-mode is good for accessing Internet web pages, as the applicable pages can be implemented using a language based on standard HTML (compact HTML). I-mode has been a phenomenal success in Japan, and proves that there is a real market for mobile Internet services and mobile e-commerce. In June 2002 i-mode had 33 million subscribers, and the number is still growing rapidly. Operators all over the world are following the i-mode service very closely, as it provides them with examples of what kinds of applications could succeed in their own future 3G networks. However, any conclusions should be drawn with caution, because in some respects Japan is quite different from other market areas, and the reasons i-mode is popular in Japan may not apply everywhere. It is also good to remember that the i-mode data transmission speed is only 9.6 kbps, so one does not need super-fast data connections for successful mobile services. Good services need good content.

4.1.6 Electronic Payment

Electronic money is an important enabler for e-commerce. A service or application has to make money: even the most technically advanced, exciting, and addictive service will fail if it cannot generate revenue. So far, the big problem with paying for services in mobile networks has been that there is no convenient method to make very small payments. Most services will be such that a user will not consume them unless they cost a very small sum of money. The business model of such a content provider is based on a large number of customers; the cost of one service usage must be so small that the user does not regard it as a real cost. If a morning cartoon delivered to your handset costs one penny, then you do not think twice about subscribing to the service ... if you happen to like cartoons. What we need here are micropayments: very small payments, even fractions of the smallest local monetary unit. Currently, e-commerce is typically paid for by giving one's credit-card number to the content provider. This is not the way to handle micropayments, as the cost of billing is much higher than the cost of the purchased service or goods. Moreover, it is a rather insecure way to pay for things.

4.1.7 Radio Resource Management

Radio resource management (RRM) is the system-level control of co-channel interference and other radio transmission characteristics in wireless communication systems, for example cellular networks, wireless networks, and broadcasting systems. RRM involves strategies and algorithms for controlling parameters such as transmit power, user allocation, beamforming, data rates, handover criteria, modulation scheme, and error coding scheme. The objective is to utilize the limited radio-frequency spectrum resources and radio network infrastructure as efficiently as possible. RRM concerns multi-user and multi-cell network capacity issues, rather than point-to-point channel capacity. Traditional telecommunications research and education often dwell upon channel coding and source coding with a single user in mind, although it may not be possible to achieve the maximum channel capacity when several users and adjacent base stations share the same frequency channel. Efficient dynamic RRM schemes may increase the system spectral efficiency by an order of magnitude, which is often considerably more than what is possible by introducing advanced channel coding and source coding schemes. RRM is especially important in systems limited by co-channel interference rather than by noise, for example cellular systems and broadcast networks homogeneously covering large areas, and wireless networks consisting of many adjacent access points that may reuse the same channel frequencies.
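As a small, concrete example of an RRM algorithm, the sketch below implements distributed transmit power control for two co-channel links using the classic Foschini-Miljanic iteration, in which each link scales its power by the ratio of its target SINR to its measured SINR. The gain, noise, and target values are invented purely for illustration and are not drawn from this document.

    gains = [[1.0, 0.1], [0.2, 1.0]]   # gains[i][j]: gain from transmitter j to receiver i
    noise = 0.01
    target_sinr = 2.0
    power = [1.0, 1.0]                 # initial transmit powers for two co-channel links

    def sinr(i, power):
        # signal-to-interference-plus-noise ratio at receiver i
        interference = sum(gains[i][j] * power[j] for j in range(len(power)) if j != i)
        return gains[i][i] * power[i] / (interference + noise)

    for _ in range(20):
        # each link independently scales its power towards the target SINR
        power = [power[i] * target_sinr / sinr(i, power) for i in range(len(power))]

    print([round(p, 4) for p in power])                       # converged powers
    print([round(sinr(i, power), 3) for i in range(2)])       # both close to 2.0

The appeal of this scheme is that each transmitter needs only its own measured SINR, yet, when the target is feasible, the powers of all interfering links converge jointly to the minimum levels that meet it.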
4.1.8 SixthSense Technology

SixthSense is a gestural interface device comprising a neck-worn pendant that contains both a data projector and a camera. Head-worn versions were also built at the MIT Media Lab in 1997; these combined cameras and illumination systems for interactive photographic art, and also included gesture recognition (e.g. finger-tracking using colored tape on the fingers). SixthSense is a name for the extra information supplied by a wearable computer, such as the device called 'WuW' (Wear yoUr World) by Pranav Mistry et al., building on the concept of the Telepointer, a neck-worn projector and camera combination first proposed and reduced to practice by MIT Media Lab student Steve Mann. The SixthSense technology comprises a pocket projector, a mirror, and a camera contained in a head-mounted, handheld, or pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls, and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision-based techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user's fingers. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. SixthSense supports multi-touch and multi-user interaction.

4.1.9 One-Time Password (OTP)

A one-time password (OTP) is a password that is valid for only one login session or transaction. OTPs avoid a number of shortcomings that are associated with traditional (static) passwords.

Figure 4.2: One-Time Password (OTP)

The most important shortcoming addressed by OTPs is that, in contrast to static passwords, they are not vulnerable to replay attacks. This means that a potential intruder who manages to record an OTP that was already used to log into a service or to conduct a transaction will not be able to abuse it, since it will no longer be valid. On the downside, OTPs are difficult for human beings to memorize; therefore, they require additional technology to work.
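The text does not prescribe a particular OTP construction, but a widely used one is HOTP (RFC 4226), which can be sketched with the Python standard library alone; the shared secret below is, of course, only a placeholder.

    import hmac, hashlib, struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # HMAC-SHA1 over the 8-byte big-endian counter, per RFC 4226
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                  # dynamic truncation
        code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(hotp(b"shared-secret", counter=0))   # a 6-digit one-time code
    print(hotp(b"shared-secret", counter=1))   # a different code for the next login

Because both sides advance the counter after each successful login, a recorded code fails verification if replayed, which is exactly the replay resistance described above.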
4.2 Examples of 5G Applications

Everybody involved in 5G businesses should understand that it is not the technology that people are buying and consuming, but the services and content. There has already been one notable failure in 2G, when WAP was marketed as a technology, but nobody told the customers what they could do with WAP. GPRS has also had a slow adoption process, because there have not been enough GPRS-specific services on offer. This shows that customers are not eager to adopt new technologies as such; they are looking for new services. I-mode, by contrast, is based on a relatively slow transmission technology but provides great services.

4.2.1 Voice

Voice is and remains the most important type of application in mobile telecommunications. However, it will increasingly be combined with other forms of communication to form multimedia communication. Even the pure voice service can provide new possibilities for applications. It is already possible to set up multipoint conference calls, but this has not been widely exploited. Voice mail will also be an attractive alternative to text-based mail systems, such as e-mail or SMS. It is also known that increased usage of other applications increases voice usage in turn. For example, if you send an electronic picture postcard (MMS) from the Eiffel Tower to your friend in Helsinki, you will probably receive a phone call from your envious friend quite soon. In 3GPP networks, voice will be transferred in IP packets once the network becomes an all-IP network. This scheme is known as Voice over IP (VoIP). VoIP is not, as such, an improvement for voice transfer: it is a mechanism that forces voice services into a packet-switched network not optimized to handle them, and this had to be done so that packet-switched IP could be used everywhere. We need to remember that circuit-switched systems were originally developed for voice transfer, and decades of optimization have made them very good at this task. Packet-switched systems will have problems with voice, as discussed next.

4.2.2 Internet Access

Internet access is an almost mandatory application for 3G mobile terminals. Over the last decade the Internet has grown to be a very important communication medium, and it continues to grow rapidly. The UMTS Forum estimates in [24] that the Internet will have 500 million users by 2005. Access to a communication medium as important as the Internet must be included in a 3G application portfolio. Fortunately, this access will be relatively easy to implement in a 3G terminal. The 3GPP is specifying an all-IP network, which means that Internet protocols could be used all the way down to the terminal level. A mobile terminal would then be an Internet node, just like any PC, with its own IP address.

4.2.3 Location-Based Applications

The UTRAN system will contain several methods to determine the UE's location. This makes it possible to provide a totally new class of applications to mobile phone users: applications that use the knowledge of the mobile user's location. The location data itself can be useful information for the user. The mobile terminal can inform a lost user about his or her location. In practice, the plain coordinates as such do not help much; they must be accompanied by some additional information, like a local map or instructions on how to get to a desired location from the current position. It is easy for the operator to provide the user with a local digital map, even if the user is lost, because the location is known to the operator. The map could even be sponsored by a local business, which could advertise itself on the map. Location-specific billing is an application that could be used by a network operator: it could introduce a scheme where usage of the mobile phone is cheaper from certain locations, for example from the home or an office. For this kind of application, the location estimate does not have to be very accurate; a cell-coverage-based location method is adequate, as the sketch below illustrates.
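As an illustration of location-specific billing, the following sketch checks whether a coarse location estimate falls inside a 'home zone'; the coordinates, the 2 km zone radius, and the tariff names are all hypothetical.

    from math import radians, sin, cos, asin, sqrt

    def distance_km(lat1, lon1, lat2, lon2):
        # great-circle (haversine) distance between two points on the Earth
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))        # Earth radius of roughly 6371 km

    HOME = (60.17, 24.94)                      # hypothetical home coordinates (Helsinki)

    def tariff(lat, lon):
        # cheaper rate when the call is made from within the home zone
        return "home-zone rate" if distance_km(lat, lon, *HOME) < 2.0 else "standard rate"

    print(tariff(60.18, 24.95))                # near home: home-zone rate
    print(tariff(61.50, 23.76))                # elsewhere: standard rate

A location estimate good to the nearest cell is ample here, since the scheme only has to decide inside-or-outside a zone, not pinpoint the user.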
4.2.4 Games

Games will be another major application segment in 5G. Most people do not admit that they like playing computer games, but despite this, games still sell extremely well. New service environments are typically justified by sensible applications, such as banking, share trading, and ticket purchases; in practice, most usage and revenue come from entertainment. There is no reason to prevent mobile entertainment from reaching a strong position in 5G. The success of entertainment, and especially pornography, in the terrestrial Internet suggests that this will happen. Currently, the games some mobile terminals contain are pretty simple: the small displays and limited input devices severely restrict the applications. With bigger displays, more powerful processors, and (possibly) special 5G game terminals, most of these limiting factors will disappear. Compared to desktop PCs, mobile terminals will always lag behind in screen resolution, processor speed, and sound quality. Thus, the wireless industry has to find applications that exploit the special characteristics of the mobile communication environment. Networked games are played against other players over the network. These games are obviously in the interest of operators, as they eat up airtime. An operator could also set up automated machine players with varying skill levels, so that there would always be an opponent available. A simple example of a networked game is a card game. These would be quite suitable for 3G systems, as the game would not need a circuit-switched connection: each played card could be sent via a packet-switched channel. The required graphics in the terminal could also be fairly simple. The terminal manufacturers could also provide Bluetooth links for 'local' multiplayer games. This solution would not consume any airtime; thus, operators would not support such games.

4.2.5 Advertising

Generally, it is not a very good idea to send an advertisement to all subscribers in the network. Most subscribers are probably not at all interested in a given advertisement and will regard it as junk mail. This will only irritate customers, and they may change their operator because of this practice. Moreover, sending large numbers of advertisements uses a lot of capacity and is thus expensive. The operator should therefore build an accurate user profile database and send an advertisement only to those customers who are most likely to be interested in it (a simple sketch of such targeting follows at the end of this section). For example, if a customer has bought several books using his mobile device, it is probable that he is interested in receiving information on book sales or new books on the same subject. Of course, the user must be given the last word in advertising: it must be possible for a user to reject all advertising. On the other hand, the operator can make the reception of advertisements more attractive, for example by lowering the subscription charges. Note that the operator can put a nice price tag on this kind of targeted advertising. For example, an ink-jet cartridge shop would pay a lot for an advertising campaign directed at 5G customers who bought an ink-jet printer about six months earlier.
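A minimal sketch of such profile-based targeting follows, with invented subscribers and purchase histories; a real operator database would be far richer, but the selection logic, including the user's right to opt out, is the same in spirit.

    profiles = {
        "alice": {"purchases": {"books", "music"}, "accepts_ads": True},
        "bob":   {"purchases": {"printer"},        "accepts_ads": True},
        "carol": {"purchases": {"books"},          "accepts_ads": False},  # rejected all advertising
    }

    def target_audience(ad_category):
        # send only to opted-in users whose purchase history matches the advert
        return [user for user, p in profiles.items()
                if p["accepts_ads"] and ad_category in p["purchases"]]

    print(target_audience("books"))     # ['alice']  (carol opted out)
    print(target_audience("printer"))   # ['bob']    e.g. the ink-jet cartridge campaign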
4.2.6 Betting and Gambling

This is yet another interesting 5G application type. Especially in East Asia, betting is very popular, and many customers would certainly subscribe to 3G just because of this application. A betting application combined with a 5G payment card provides completely new possibilities. A user could follow a horse race from the stand and bet on the winning horse without leaving his seat: the bet is instantly deducted from his payment card, and any prizes could similarly be paid back in real time. The user could also watch the race from his armchair at home and make bets as if he were actually at the horse race. Various game shows on TV could include viewer betting. However, there could be problems even in the most capable 3G network if 10 million viewers send in their bets at the same time. Prepaid customers may not be allowed to enjoy this service. This is not because of a missing credit history: if there is credit left in the prepaid account, then in principle it could be used for gambling. However, in most countries betting and gambling are forbidden for minors; thus, the identity of the customer must be known, and this is not the case with normal prepaid customers.

4.2.7 Dating Applications

These are already very popular in Japan. Many people prefer to get to know other people without revealing their own identity first. The technical implementation of a dating application may vary: it can be a simple bulletin board with dating adverts combined with an anonymous e-mail server, or it could be a lonely-hearts mobile chat room. Users can also set up their own profiles (or the profile of the company they seek), and wait until the matchmaker application finds a suitable match. Dating ads may also include still images and audio clips.

4.2.8 Adult Entertainment

This is the most profitable entertainment business in general, and it will remain so in 3G; the premiums will be very high. It will be interesting to see how operators deal with adult entertainment. It is a lucrative market, and they will certainly want to take their share of the profit somehow. On the other hand, in some countries there may be regulations preventing operators from providing this kind of service, or it may simply not be socially or politically acceptable for an operator to do so. In any case, it will be very difficult to censor adult entertainment services because, in most countries, they will be legal or only modestly regulated.

CHAPTER 5 CONCLUSIONS

5G technology is going to be a new revolution in the mobile market. With 5G technology, users will be able to use their cellular phones worldwide; the technology is also entering the Chinese mobile market, and a user will be able to access a German phone as if it were a local phone. With the arrival of cell phones akin to PDAs, the whole office can now be at your fingertips, or in your phone. 5G technology has a bright future because it can accommodate the best available technologies and offer powerful handsets to customers. As data traffic has tremendous growth potential, the existing voice-centric telecom hierarchies under 4G will move to a flat IP architecture, in which base stations are directly connected to media gateways. 5G will promote the concept of a Super Core, where all network operators are connected to one single core and share one single infrastructure, regardless of their access technologies.
5G will bring the evolution of active infrastructure sharing and managed services, and eventually all existing network operators will become MVNOs (mobile virtual network operators).

REFERENCES

[1] Akhilesh Kumar Pachuri and Ompal Singh, "5G Technology - Redefining Wireless Communication in Upcoming Years", International Journal of Computer Science and Management Research, Vol. 1, Issue 1, ISSN 2278-733X, Aug 2012.

[2] Cornelia-Ionela Badoi, Neeli Prasad, Victor Croitoru and Ramjee Prasad, "5G Based Cognitive Radio", Wireless Personal Communications, Volume 57, Number 3, pp. 441-464, doi:10.1007/s11277-010-0082-9, Springer. Retrieved 27 September 2013.

[3] Xichun Li, Abudulla Gani, Rosli Salleh and Omar Zakaria, "The Future of Mobile Wireless Communication Networks", International Conference on Communication Software and Networks, ISBN 978-0-7695-3522-7. Retrieved 27 September 2013.

[4] "Huawei plans $600m investment in 10Gbps 5G network", 6 November 2013. Retrieved 11 November 2013.

[5] Leonardo S. Cardoso, Marco Maso, Mari Kobayashi and Mérouane Debbah, "Orthogonal LTE Two-Tier Cellular Networks", 2011 IEEE International Conference on Communications (ICC), July 2011, pp. 1-5. Retrieved 27 September 2013.

[6] J. Hoydis, S. ten Brink and M. Debbah, "Massive MIMO in the UL/DL of Cellular Networks: How Many Antennas Do We Need?", IEEE Journal on Selected Areas in Communications, vol. 31, no. 2, February 2013, pp. 160-171. Retrieved 27 September 2013.