MULTITERABIT NETWORKS
The explosive demand for bandwidth from data networking applications continues
to drive photonics technology toward ever-increasing capacity in the backbone fiber
network and toward flexible optical networking. Commercial Tb/s (per fiber)
transmission systems have already been announced, and within the next several years we
can expect to begin running up against the 50 THz transmission bandwidth of silica
optical fiber. Efficient bandwidth utilization will be one of the challenges of photonics
research. Since traffic will be dominated by data, we can expect the network of the future
to consist of multiterabit packet switches that aggregate traffic at the edge of the network
and cross-connects with wavelength granularity and tens of terabits of throughput in the
core.
The infrastructure required to carry Internet traffic volume, which doubles roughly
every six months, consists of two complementary elements: fast point-to-point links and
high-capacity switches and routers. Dense wavelength division multiplexing (DWDM)
technology, which permits transmission of several wavelengths over the same optical
medium, will enable optical point-to-point links to achieve an estimated 10 terabits per
second by 2008. However, the rapid growth of Internet traffic, coupled with the
availability of fast optical links, threatens to create a bottleneck at the switches and routers.
2. DWDM
2.1 Options for Increasing Carrier Bandwidth
Faced with the challenge of dramatically increasing capacity while constraining
costs, carriers have two options: Install new fiber or increase the effective bandwidth of
existing fiber. Laying new fiber is the traditional means by which carriers expand their
networks, but it is a costly proposition: roughly $70,000 per mile, most of which goes to
permits and construction rather than the fiber itself. Laying new fiber therefore makes
sense only when it is desirable to expand the embedded base. Increasing the effective
capacity of existing fiber can be accomplished in two ways:
o Increase the bit rate of existing systems.
o Increase the number of wavelengths on a fiber.
2.2 Increase the Bit Rate
Using TDM, data is now routinely transmitted at 2.5 Gbps (OC-48) and,
increasingly, at 10 Gbps (OC-192); recent advances have pushed speeds to 40 Gbps
(OC-768). The electronic circuitry that makes this possible, however, is complex and
costly, both to purchase and to maintain. In addition, significant technical issues may
restrict the applicability of this approach. Transmission at OC-192 over single-mode (SM)
fiber, for example, is 16 times more affected by chromatic dispersion than the next lower
aggregate speed, OC-48, since the dispersion penalty grows with the square of the bit
rate. The greater transmission power required by the higher bit rates also introduces
nonlinear effects that can degrade waveform quality. Finally, polarization mode dispersion
is another effect that limits the distance a light pulse can travel without degradation. The
dispersion scaling is sketched below.
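
As a quick illustration of the scaling just described, the following Python sketch
computes the relative dispersion penalty under the common rule of thumb that the penalty
grows with the square of the bit rate; the function name and the rates passed to it are
illustrative assumptions, not values from any standard.

def dispersion_penalty_factor(new_rate_gbps, base_rate_gbps):
    """Relative chromatic dispersion penalty when the bit rate rises
    from base_rate to new_rate (penalty assumed to scale as B**2)."""
    return (new_rate_gbps / base_rate_gbps) ** 2

print(dispersion_penalty_factor(10, 2.5))   # OC-192 vs OC-48 -> 16.0
print(dispersion_penalty_factor(40, 2.5))   # OC-768 vs OC-48 -> 256.0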
2.3 Increase the Number of Wavelengths
In this approach, many wavelengths are combined onto a single fiber. Using
wavelength division multiplexing (WDM) technology, several wavelengths, or colors of
light, each carrying a signal of 2.5 to 40 Gbps, can be multiplexed simultaneously over a
single strand of fiber. Without laying any new fiber, the effective capacity of the existing
fiber plant can routinely be increased by a factor of 16 or 32. Systems with 128 and 160
wavelengths are in operation today, with higher densities on the horizon. The specific
limits of this technology are not yet known. A simple capacity calculation is sketched below.
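
To make the multiplication concrete, here is a minimal Python sketch of the
aggregate-capacity arithmetic; the function and the example channel counts are illustrative.

def aggregate_capacity_gbps(num_wavelengths, rate_per_wavelength_gbps):
    """Aggregate fiber capacity: number of wavelengths times the
    per-channel bit rate."""
    return num_wavelengths * rate_per_wavelength_gbps

print(aggregate_capacity_gbps(32, 2.5))   # 80 Gbps from a 32-channel system
print(aggregate_capacity_gbps(160, 10))   # 1600 Gbps (1.6 Tbps) per fiber

A 160-wavelength system carrying 10 Gbps per channel thus already exceeds a terabit
per second on a single strand.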
Another fundamental difference between the two technologies is that WDM can carry
multiple protocols without a common signal format, while SONET cannot.
2.6 Why DWDM?
From both technical and economic perspectives, the ability to provide potentially
unlimited transmission capacity is the most obvious advantage of DWDM technology.
The current investment in fiber plant can not only be preserved, but optimized by a factor
of at least 32. As demands change, more capacity can be added, either by simple
equipment upgrades or by increasing the number of lambdas on the fiber, without
expensive upgrades. Capacity can be obtained for the cost of the equipment, and existing
fiber plant investment is retained. Bandwidth aside, DWDM's most compelling technical
advantages can be summarized as follows:
Transparency—Because DWDM is a physical layer architecture, it can transparently
support both TDM and data formats such as ATM, Gigabit Ethernet, and Fibre Channel
with open interfaces over a common physical layer.
Scalability—DWDM can leverage the abundance of dark fiber in many metropolitan area
and enterprise networks to quickly meet demand for capacity on point-to-point links.
Dynamic provisioning—Fast, simple, and dynamic provisioning of network connections
gives providers the ability to offer high-bandwidth services in days rather than months.
In its most general form, optimal routing involves forwarding a packet from
source to destination using the "best" path.
3.2 Requirements
These observations suggest that an open systems routing architecture should:
1. Scale well
2. Support many different subnetwork types and multiple qualities of service
3. Adapt to topology changes quickly and efficiently (i.e., with minimum overhead and
complexity)
4. Provide controls that facilitate the "safe" connection of multiple organizations
It is not likely that the manual administration of static routing tables (the earliest
medium for the maintenance of internetwork routes, in which a complete set of fixed
routes from each system to every other system was periodically, often no more frequently
than once a week, loaded into a file on each system) will satisfy these objectives for a
network connecting more than a few hundred systems. A routing scheme for a large-scale
open systems network must be dynamic, adaptive, and decentralized; be capable of
supporting multiple paths offering different types of service; and provide the means to
establish trust, firewalls, and security across multiple administrations (see ISO/IEC TR
9575, the OSI Routing Framework). A sketch of such a static routing table follows.
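
To make the contrast with dynamic routing concrete, the following Python sketch shows
what a manually administered static routing table amounts to; the network names and
next hops are made up for illustration.

# A static routing table: every destination maps to a fixed next hop,
# and the table changes only when an operator reloads it.
STATIC_ROUTES = {
    "net-A": "router-1",
    "net-B": "router-2",
    "net-C": "router-1",
}

def next_hop(destination):
    """Fixed lookup: cannot adapt if router-1 fails or a new network
    appears; an operator must edit and redistribute the table."""
    return STATIC_ROUTES[destination]

print(next_hop("net-B"))  # router-2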
3.3 OSI Routing Architecture
The architecture of routing in OSI is basically the same as the architecture of
routing in other connectionless (datagram) networks, including TCP/IP. As usual,
however, the conceptual framework and terminology of OSI are more highly elaborated
than those of its roughly equivalent peers, and thus, it is the OSI routing architecture that
gets the lion's share of attention here. Keep in mind that most of what is said about the
OSI routing architecture applies to hop-by-hop connectionless open systems routing in
general. The OSI routing scheme consists of: A set of routing protocols that allow end
systems and intermediate systems to collect and distribute the information necessary to
determine routes
* A routing information base containing this information, from which routes between end
systems can be computed. (Like a directory information base, the routing information
base is an abstraction; it doesn't exist as a single entity.) The routing information base
cannot, in practice, contain all the information necessary to specify routes from any "here"
to any "there" in the entire global Internet. Neither is it possible to design a single routing
protocol that operates well
global Internet. Neither is it possible to design a single routing protocol that operates well
both in local environments (in which it is important to account quickly for changes in the
local network topology) and in wide area environments (in which it is important to limit
the percentage of network bandwidth that is consumed by "overhead" traffic such as
routing updates).
3.4 Router Functions
The functions of a router can be broadly classified into two main categories:
1. Datapath functions: These are applied to every datagram that reaches the router and is
successfully routed without being dropped at any stage. The main functions in this
category are the forwarding decision, forwarding through the backplane, and output link
scheduling.
2. Control functions: These mainly include system configuration, management, and the
updating of routing table information. They do not apply to every datagram and are
therefore performed relatively infrequently.
The goal in designing high-speed routers is to increase the rate at which datagrams are
routed; the datapath functions are therefore the ones to improve in order to enhance
performance. The major datapath functions are discussed briefly below, with a combined
sketch after the list.
* The forwarding decision: A routing table search is performed for each arriving
datagram, and the output port is determined from the destination address. In addition, a
next-hop MAC address is prepended to the datagram, the time-to-live (TTL) field of the
IP datagram header is decremented, and a new header checksum is calculated.
* Forwarding through the backplane: The backplane is the physical path between the
input port and the output port. Once the forwarding decision is made, the datagram is
queued before it can be transferred to the output port across the backplane. If there is not
enough space in the queues, it may be dropped.
* Output link scheduling: Once a datagram reaches the output port, it is queued again
before it can be transmitted on the output link. Most traditional routers maintain a single
FIFO queue, but advanced routers maintain separate queues for different flows or priority
classes and carefully schedule the departure time of each datagram in order to meet
various delay and throughput guarantees.
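
The following Python sketch ties the three datapath functions together in deliberately
simplified form; the table layout, field names, queue limit, and FIFO discipline are
illustrative assumptions, not a description of any particular router (real routers, for
instance, use longest-prefix matching rather than exact-match lookups).

from collections import deque

# Illustrative forwarding table: destination prefix -> (output port, next-hop MAC).
FORWARDING_TABLE = {
    "10.0.0.0/8": (1, "aa:bb:cc:00:00:01"),
    "192.168.0.0/16": (2, "aa:bb:cc:00:00:02"),
}

QUEUE_LIMIT = 1024                        # assumed per-port buffer, in datagrams
output_queues = {1: deque(), 2: deque()}  # one FIFO per output port

def forwarding_decision(datagram):
    """Look up the output port, attach the next-hop MAC, decrement the TTL
    (a real router would also update the header checksum here)."""
    port, mac = FORWARDING_TABLE[datagram["dst_prefix"]]
    datagram["next_hop_mac"] = mac
    datagram["ttl"] -= 1
    return port

def forward_through_backplane(datagram, port):
    """Queue the datagram toward its output port; drop it if the buffer is full."""
    queue = output_queues[port]
    if len(queue) >= QUEUE_LIMIT:
        return False  # dropped
    queue.append(datagram)
    return True

def output_link_schedule(port):
    """Transmit in FIFO order; an advanced router would choose among
    per-flow or per-class queues here instead."""
    queue = output_queues[port]
    return queue.popleft() if queue else None

dg = {"dst_prefix": "10.0.0.0/8", "ttl": 64}
port = forwarding_decision(dg)
forward_through_backplane(dg, port)
print(output_link_schedule(port))  # the datagram, TTL now 63, MAC attached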
[Figure: router line cards interconnected through buffer memory]
* Guarantee short, deterministic delay: Real-time voice and video traffic require short
and predictable delay through the system. Unpredictable delay produces discontinuities
that are unacceptable for these applications.
* Quality of service: Routers must be able to support service-level agreements,
guaranteed line rate, and differentiated quality of service for different applications or
flows. This quality-of-service support must be configurable.
* Multicast traffic: Internet traffic is changing from predominantly point-to-point to
multicast, so routers must support a large number of simultaneous multicast
transmissions.
* High availability: High-speed routers located in backbones handle huge amounts of
data and cannot be taken out of service for upgrades. Features such as hot-swappable
software tasks, which allow in-service software upgrades, are therefore required.
Such a device combines the capacity and robustness of an all-optical switch with the
intelligence of a data router. Signals that pass through the device require no conversion
from optical to electrical representation.
3.9 QoS Classes
Supporting QoS classes requires routers to implement smarter scheduling schemes, a
difficult task given that routers usually configure the crosspoint and transmit data on a
per-data-unit basis. The scheduling algorithm must make a new configuration decision for
each incoming data unit. In an ATM switch, where the data unit is a 53-byte cell, the
algorithm must issue a scheduling decision every 168 ns at 2.5-Gbit-per-second line rates.
As 10-Gbit-per-second port rates become standard for high-end routers, the decision time
shrinks fourfold, to a mere 42 ns. The arithmetic is sketched below.
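
The timing figures above follow directly from the cell size and the line rate, as this small
Python sketch shows; the rates are nominal, which is why the computed value (about 170
ns) lands close to, but not exactly on, the 168 ns quoted above.

def decision_time_ns(cell_bytes, line_rate_gbps):
    """Time available per scheduling decision: one data unit's worth of
    bits divided by the line rate (bits per Gbit/s gives nanoseconds)."""
    return cell_bytes * 8 / line_rate_gbps

print(decision_time_ns(53, 2.5))   # ~169.6 ns per 53-byte cell at 2.5 Gbps
print(decision_time_ns(53, 10.0))  # ~42.4 ns at 10 Gbps, a fourfold reduction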
4. QUEUING STRATEGIES
The switch fabric core embodies the crosspoint element responsible for matching
N input and output ports. Currently, routers incorporate electrical crosspoints, but optical
crosspoint solutions, such as those based on dynamic DWDM, show promise. Regardless
of the underlying technology, the basic functionality of determining the crosspoint
configuration and transmitting the data remains the same.
4.1 Input queuing
We can generally categorize switch queuing architectures as input queued or
output queued. In input-queued switches, network processors store arriving packets in
FIFO buffers that reside at the input port until the processor signals them to traverse the
crosspoint. A disadvantage of input queuing is that a packet at the front of a queue can
prevent other packets from reaching potentially available destination, or egress, ports, a
phenomenon called head-of-line (HOL) blocking. Consequently, the overall switching
throughput degrades significantly: for uniformly distributed Bernoulli i.i.d. traffic flows,
the maximum achievable throughput using input queuing is about 58.6 percent of the
switch core capacity. Virtual output queuing (VOQ) entirely eliminates HOL blocking. As
Fig. 6 shows, every ingress port in VOQ maintains N separate queues, each associated
with a different egress port. The network processor automatically classifies and stores
packets upon arrival in the queue corresponding to their destination. VOQ thus ensures
that held-back packets do not block packets destined for available outputs. Assigning
several prioritized queues, instead of one queue, to each egress port yields per-class QoS
differentiation. A minimal VOQ sketch follows.
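
Here is a minimal Python sketch of the VOQ idea under simplified assumptions (a fixed
port count and one queue per egress, with no priority classes); it shows how per-egress
queues keep one blocked destination from holding up traffic to other destinations.

from collections import deque

N = 4  # assumed number of ports

# One virtual output queue per (ingress, egress) pair.
voq = [[deque() for _ in range(N)] for _ in range(N)]

def enqueue(ingress, egress, packet):
    """Classify the packet on arrival into the queue for its egress port."""
    voq[ingress][egress].append(packet)

def dequeue(ingress, egress):
    """A packet bound for a busy egress never blocks other egresses,
    because each egress has its own queue at the ingress."""
    queue = voq[ingress][egress]
    return queue.popleft() if queue else None

enqueue(0, 3, "pkt-a")  # bound for egress 3, which we suppose is congested
enqueue(0, 1, "pkt-b")  # bound for egress 1
print(dequeue(0, 1))    # pkt-b departs even though pkt-a is still held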
5. SCHEDULING APPROACHES
The main challenge of packet scheduling is designing fast yet clever algorithms to
determine input-output matches that, at any given time:
• maximize switch throughput by matching as many input-output pairs as possible,
• minimize the mean packet delay as well as jitter,
• minimize packet loss resulting from buffer overflow, and
• support strict QoS requirements in accordance with diverse data classes.
Intuitively, these objectives appear contradictory. Maximizing the instantaneous
number of input-output matches, for example, may not produce optimal bandwidth
allocation in terms of QoS, and vice versa. Scheduling is clearly a delicate task of
assigning ingress ports to egress ports while optimizing several performance parameters.
Moreover, as port density and bit rates increase, the scheduling task becomes more
complex because more decisions must be made within shorter time frames. Advanced
scheduling schemes exploit concurrency and distributed computation to offer a faster,
more efficient decision process. A greedy matching sketch follows.
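
As an illustration of the matching problem itself (not of any particular production
algorithm), the following greedy Python sketch pairs each input with the first free output
for which it has queued traffic; the port numbers and request sets are illustrative.

def greedy_match(requests):
    """requests[i] is the set of egress ports for which ingress i has
    queued packets; returns a dict of ingress -> egress matches."""
    taken = set()
    matches = {}
    for ingress, wanted in enumerate(requests):
        for egress in sorted(wanted):
            if egress not in taken:
                matches[ingress] = egress
                taken.add(egress)
                break
    return matches

# Three ingress ports contending for two popular egress ports:
print(greedy_match([{0, 1}, {0}, {0, 2}]))  # {0: 0, 2: 2}; ingress 1 waits

A maximal match like this leaves ingress 1 unserved for the slot, which is exactly the
trade-off between instantaneous throughput and fairness that the objectives above describe.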
• Accept. Each input selects one granting output according to a predefined priority order.
A unique pointer indicates the position of the highest-priority element and increments
(modulo N) to one location beyond the accepted output. Instead of updating after every
grant, an output's grant pointer updates only if an input accepts the grant. iSLIP
significantly reduces pointer synchronization and accordingly increases throughput with a
lower average packet delay. The algorithm does, however, suffer from degraded
performance in the presence of non-uniform and bursty traffic flows, a lack of inherent
QoS support, and limited scalability with respect to high port densities. Despite these
weaknesses, iSLIP's low implementation complexity has led to its extensive deployment
alongside various crosspoint switches. The pointer-update rule is sketched below.
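
The following Python fragment sketches the accept-phase pointer discipline described
above, under simplifying assumptions (one iteration, one accept pointer per input, one
grant pointer per output); it illustrates the update rule rather than implementing full iSLIP.

N = 4
accept_ptr = [0] * N  # per-input round-robin pointer over outputs
grant_ptr = [0] * N   # per-output round-robin pointer over inputs

def accept(input_port, granting_outputs):
    """Choose the granting output nearest the accept pointer, then advance
    the pointer (modulo N) to one location beyond the accepted output."""
    for step in range(N):
        out = (accept_ptr[input_port] + step) % N
        if out in granting_outputs:
            accept_ptr[input_port] = (out + 1) % N
            # The output's grant pointer moves only because its grant was
            # accepted -- this is what desynchronizes the pointers.
            grant_ptr[out] = (input_port + 1) % N
            return out
    return None

print(accept(0, {1, 3}))  # accepts output 1; both pointers advance past it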