Chapter 3-1
Figure 3.1. Basic Elements of a Communication System
Source: entity that generates data; e.g., a person who speaks into a phone, or a computer sending data to a
modem.
Transmitter: a transmitter converts the electrical signal into a form that is suitable for transmission
through the physical channel or transmission medium. In general, a transmitter performs the matching of
the message signal to the channel by a process called modulation.
The choice of the type of modulation is based on several factors, such as:
- the type of noise and interference that the signal encounters in transmission over the channel,
- and the electronic devices that are available for signal amplification prior to transmission.
Transmission Channel: the communication channel is the physical medium that connects the
transmitter to the receiver. The physical channel may be a pair of wires that carry electrical signals, an
optical fiber that carries the information on a modulated light beam, or free space, in which the
information-bearing signals are electromagnetic waves.
Receiver: the function of a receiver is to recover the message signal contained in the received signal.
The main operations performed by a receiver are demodulation, filtering and decoding.
Destination: entity that finally uses the data.
Example: - the computer at the other end, attached to the receiving modem.
Information: the information generated by the source may be in the form of voice, a picture or plain text. An essential
feature of any source that generates information is that its output is described in probabilistic terms; that is, the
output is not deterministic. A transducer is usually required to convert the output of a source into an electrical
signal that is suitable for transmission.
As a matter of fact, both analog and digital signals can take one of the following forms:
Periodic Signal
This signal completes a pattern within a measurable time frame and repeats that pattern over
subsequent identical periods. The completion of one full pattern is known as a cycle.
Aperiodic Signal
An aperiodic signal changes without exhibiting a pattern or cycle that repeats over time.
Analog signal
Analog data takes on continuous values on some interval. The most familiar examples of analog data are audio,
video, the outputs of many sensors such as temperature and pressure sensors, etc.
In most cases we use periodic analog signals. Some basic terms associated with analog signals are:
Peak Amplitude: the absolute value of the signal's highest intensity, which is proportional to
the energy the signal carries.
Period: the amount of time a signal requires to complete one cycle. The period is usually
expressed in seconds.
Frequency: the number of periods per second.
Figure 3.2. Analog Signal
Frequency is the number of times per second that the wave cycle repeats, or oscillates. Period and frequency
are inversely related: shorter wavelengths produce higher frequencies because the waves are closer together,
and vice versa. The unit used to measure frequency is the hertz (Hz), which means cycles per second. The unit
can be scaled with standard metric prefixes (kHz, MHz, GHz).
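The inverse relationship between period and frequency can be sketched in a few lines of Python; the example values below are illustrative:

```python
# Period and frequency are reciprocals: f = 1 / T.
def period_to_frequency(period_s: float) -> float:
    """Convert a period in seconds to a frequency in hertz."""
    return 1.0 / period_s

# A signal that completes one cycle every millisecond:
f = period_to_frequency(0.001)   # 1000.0 Hz, i.e. 1 kHz
```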
Bandwidth Capacity
Bandwidth is the transmission capacity of a communications channel. In the world of networking, bandwidth
is measured in terms of megabits per second. A medium with a high capacity has a high bandwidth; a medium
with a low capacity has a low bandwidth.
Digital signal
A signal that is discrete with respect to time is called a digital signal. Such a signal can be modeled using the
binary number system, as shown in the figure below.
Note: The bandwidth of digital signals is usually measured in bits per second (bps). The unit can be scaled
with standard metric prefixes (kbps, Mbps, Gbps).
As discussed later, analog signals are used to represent analog data and digital signals are used to represent
digital data. Both kinds of signal can be carried by either analog transmission or digital transmission.
In either case, the signal becomes weaker (attenuates) after a certain distance. To achieve
longer distances, an analog transmission system includes amplifiers that increase the energy of the
signal; unfortunately, they amplify the noise as well. Because of this, signals carried by analog
transmission are particularly susceptible to noise.
Transfer of data units through a circuit-switched network involves three operational phases:
– The entry node sets up a path by cross-connecting one of the outgoing trunk circuits in the direction of the
destination end system.
– The address information is then transferred to the next node, where again a cross-connection between the
incoming and outgoing trunks is established.
– This process is repeated at each intermediate node and at the exit node, which serves the destination end
system.
– The exit node sends an incoming-call indication to the destination end system, which returns a call acceptance.
The network then confirms establishment of the connection to the call-originating end system.
– The network resources allocated for the purpose of setting up the connection are for the exclusive use of the
end systems to transport their data. The data rates at the source and destination remain the same.
– The address of the destination is specified only once, during call set-up. All subsequent data units are
transmitted on the path already established.
• Advantage:
– Since there is a dedicated transmission channel, it provides a good data rate (the connection is
guaranteed).
• Limitation:
– Since the connection is dedicated, it cannot be used to transmit any other data even if the channel
is idle (it is inefficient).
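The set-up and exclusive reservation of a path described above can be sketched in Python. The network model here is a simplification invented for illustration: links are pairs of adjacent node names, and a call either reserves its whole path or fails.

```python
class CircuitNetwork:
    """Sketch of circuit switching: a path of trunk links is reserved
    end to end at call set-up and released when the call ends."""

    def __init__(self):
        self.reserved = set()               # links currently in use

    def setup(self, path):
        """Reserve every link on the path, or fail if any link is busy."""
        links = list(zip(path, path[1:]))
        if any(link in self.reserved for link in links):
            return False                    # a trunk on the path is busy
        self.reserved.update(links)         # exclusive use by this call
        return True

    def release(self, path):
        """Give the trunk circuits back when the call ends."""
        self.reserved.difference_update(zip(path, path[1:]))

net = CircuitNetwork()
ok = net.setup(["A", "B", "C"])    # call established over A-B and B-C
busy = net.setup(["B", "C", "D"])  # fails: trunk B-C is already reserved
```

This mirrors both properties in the text: the reserved trunks carry only this call's data, and a second call cannot use them even though they may be idle.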
Message switching establishes a connection between two devices, transfers the information to the second
device, and then breaks the connection. The information is stored and forwarded from the second device after
a connection between that device and a third device on the path is established. This “store and forward”
routine continues until the message reaches its destination. All information follows the same physical path;
unlike with circuit switching, however, the connection is not continuously maintained. Message switching
requires that each device in the data’s path has sufficient memory and processing power to accept and store
the information before passing it to the next node.
Each message is treated as an independent unit and includes its own destination and source address. Each
complete message is then transmitted from device to device through the internetwork. Each intermediate
device receives the message and stores it until the next device is ready to receive it, and then forwards it to the
next device. As a result, this network is sometimes known as a store-and-forward network. Example: TELEGRAPH networks.
Limitations
Storing and forwarding introduces delay; hence message switching cannot be used for real-time applications
such as voice and video.
Intermediate devices require a large storage capacity, since each must store a message until a free path to
the next device is available.
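The store-and-forward behaviour can be illustrated with a minimal Python sketch; the node names and message below are hypothetical:

```python
def store_and_forward(message: str, path: list) -> list:
    """Simulate message switching: the whole message is stored at each
    node before being forwarded to the next one along the path."""
    log = []
    for current, nxt in zip(path, path[1:]):
        # The entire message must fit in the node's buffer before forwarding.
        log.append(f"{current} stored {len(message)} bytes, forwarding to {nxt}")
    return log

hops = store_and_forward("HELLO", ["A", "B", "C"])
```

Each log entry corresponds to one store-and-forward step, which is where the delay and the storage requirement come from.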
When packets reach their destination node, the node reassembles them based on their control information. Because of
the time it takes to reassemble the packets into a message, packet switching is not optimal for live audio or video
transmission. Nevertheless, it is a fast and efficient mechanism for transporting typical network data, such as e-mail
messages, spreadsheet files, or even software programs from a server to client. The greatest advantage to packet
switching lies in the fact that it does not waste bandwidth by holding a connection open until a message reaches its
destination, as circuit switching does. And unlike message switching, it does not require devices in the data’s path to
process any information. Ethernet networks and the Internet are the most common examples of packet-switched
networks.
In a packet-switching network there is no direct connection from point A to point B. Rather than a direct
connection, the packet-switching network has a mesh of paths between the two points (see the following
Figure).
o A packet contains: information about source and target address, length of data, packet
sequence number, flags to indicate beginning and end, etc.
o One packet doesn’t necessarily follow the path followed by another packet.
There are actually two types of packet switching. In connection-oriented (CO), or virtual-circuit, packet
switching, a route across the network is established and all packets of data follow that route. The sender first
requests a connection to the receiver and waits for the connection to be established. Once it is established, the
virtual connection is left in place while the data is transmitted. When data transmission is complete, the
connection is given up. You can think of CO packet switching as being like a telephone call: when you are
talking to somebody on the phone, all information is sent by the same route over the telephone network.
When you put the phone down, the connection is given up. A specific case of CO packet switching is the virtual
private network (VPN). A VPN is a private network that uses public network infrastructure. A series of
encrypted logical connections (or tunnels) are made across the public network, enabling computers in different
parts of the world to communicate as if they were on a private network. CO packet switching is similar to circuit switching.
In connectionless (CL) circuits, no pre-determined route exists, and each packet is routed independently. In this
case, the sender simply prepares the packet for transmission, adds the destination address, and sends it onto
the network. The network hardware will then use the address to route the packet in the best way that it can.
You can think of CL packet switching as being like posting a series of letters: the postal service will send each
letter independently, and you cannot be sure that each letter will follow the same physical route. CL packet
switching is similar to message switching.
CO packet switching has more ‘overheads’: before transmission can start time must be spent setting up the
virtual connection across the network, and after it has finished more time must be spent closing the
connection. However, once transmission has commenced, bandwidth can be reserved so it is possible to
guarantee higher data rates, which is not possible with CL packet switching. Therefore CO packet switching is
well suited to real-time applications such as streaming of video and/or sound. On the other hand, CL packet
switching is simpler, has fewer overheads, and allows multicast and broadcast addressing.
A routing table is stored in the RAM of a network device such as a bridge, switch or router, and contains
information about where to forward data to, based on its destination address. For example, Figure 1 shows a
simple network consisting of 3 switches and 6 computers. Each switch connects a different sub-network. Each
computer has an address consisting of a network number followed by a computer number (e.g. computer A is
in network number 1 and has computer number 2). The routing table is shown for switch 3, and indicates
where the next destination (or next hop) should be for reaching each address on the network.
For instance, if computer C sends data to computer A, then switch 3 will first look at the destination address (1,
2), and then look up this address in its routing table. It finds that the next hop for this address is port 5 of the
switch, and so sends the data to this port and no other.
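A routing table of this kind is essentially a lookup from destination address to outgoing port. A minimal Python sketch, with port numbers assumed for illustration rather than taken from the figure:

```python
# Hypothetical routing table for switch 3: a destination address
# (network number, computer number) maps to an outgoing port.
routing_table = {
    (1, 2): 5,   # e.g. computer A in network 1
    (2, 1): 7,   # illustrative entry; ports are assumed, not from the figure
}

def next_hop(table, destination):
    """Return the outgoing port for a destination, or None if unknown."""
    return table.get(destination)

port = next_hop(routing_table, (1, 2))   # the switch forwards on port 5
```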
Figure 3.X
The next question is how the data in the routing table is determined. We will now look at some of the common
strategies used to route packets in packet-switching networks. We will first survey some of the key
characteristics of such strategies, and then examine some specific routing strategies.
Characteristics of routing strategies
The primary function of a packet switching network is to accept packets from a source station and deliver them
to a destination station. To accomplish this a route through the network must be established. Often, more than
one route is possible. Thus, the ‘best’ route must be determined. There are a number of requirements that this
decision should take into account:
Correctness
Simplicity
Robustness
Stability
Fairness
Optimality
Efficiency
The first two requirements are straightforward: correctness means that the route must lead to the correct
destination; and simplicity means that the algorithm used to make the decision should not be too complex.
Robustness has to do with the ability of the network to cope with network failures and overloads. Ideally, the
network should react to such failures without losing packets and without breaking virtual circuits. Stability
means that the network should not overreact to such failures: the performance of the network should remain
reasonably stable over time. A tradeoff exists between fairness and optimality. The optimal route for one
packet is the shortest route (measured by some performance criterion). However, giving one packet its optimal
route may adversely affect the delivery of other packets. Fairness means that overall most packets should have
a reasonable performance. Finally, any routing strategy involves some overheads and processing to calculate
the best routes. Efficiency means that the benefits of these overheads should outweigh their cost.
The selection of a route is generally based on some performance criterion. The simplest criterion to use is the
smallest number of hops between the source and destination. A hop generally refers to a journey between two
network nodes. A network node could be a computer, router, or other network device. A slightly more
advanced technique is to assign a cost to each link in the network. A shortest path algorithm can then be used
to calculate the lowest cost route. For example, in Figure 2, there are 5 network devices. The weighted edges
between them represent the costs of the connections. To send data from device 1 to device 5, the shortest
path is via devices 3 and 4. However, the shortest number of hops would be via device 3 only. The cost used
could be related to the throughput (i.e. speed) of the link, or related to the current queuing delay on the link.
Two key characteristics of the routing decision are when and where it is made. The decision time is determined
by whether we are using a CO network or a CL network. For CL networks the route is established
independently for each packet. For CO networks the route is established once at the time the virtual circuit is
set up. The decision place refers to which node(s) are responsible for the routing decision. The most common
technique is distributed routing, in which each node has the responsibility to forward each packet as it arrives.
For centralized routing all routing decisions are made by a single designated node. The danger of this approach
is that if this node is damaged or lost the operation of the network will cease. In source routing, the routing
path is established by the node that is sending the packet.
Almost all routing strategies will make their routing decisions based upon some information about the state of
the network. The network information source refers to where this information comes from, and the network
information update timing refers to how often this information is updated. Local information means just using
information from outgoing links from the current node. An adjacent information source means any node which
has a direct connection to the current node. The update timing of a routing strategy can be continuous
(updating all the time), periodic (every t seconds), or occur when there is a major load or topology change.
Now that we are familiar with some of the characteristics and elements of routing strategies, we will examine
some specific examples.
In fixed routing, a single, permanent route is established for each source-destination pair in the network. We
say in this case that the routing table of each network device is static, i.e. it will not change once assigned.
These routes can be calculated using a shortest path algorithm based on some cost criterion, or for simple
networks they can be assigned manually by the network administrator. Fixed routing is a simple scheme, and it
works well in a reliable network with a stable load. However, it does not respond to network failures, or
changes in network load (e.g. congestion).
3.4.2.2 Flooding
Another simple routing technique is flooding. This technique requires no network information at all, and works
as follows. A packet is sent by the source to each of its adjacent nodes. At each node, incoming packets are
retransmitted to every outgoing link apart from the one on which it arrived. If/when a duplicate packet arrives
at a node, it is discarded. This identification is made possible by attaching a unique identifier to each packet.
With flooding, all possible routes between the source and the destination are tried. Therefore so long as a path
exists at least one packet will reach the destination. This means that flooding is a highly robust technique, and
is sometimes used to send emergency information. Furthermore, at least one packet will have used the least
cost route. This can make it useful for initializing routing tables with least cost routes. Another property of
flooding is that every node on the network will be visited by a packet. This means that flooding can be used to
propagate important information on the network, such as routing tables.
A major disadvantage of flooding is the high network traffic that it generates. For this reason it is rarely used
on its own, but as described above it can be a useful technique when used in combination with other routing
strategies.
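Flooding is simple enough to simulate in a few lines of Python. This sketch models the network as an adjacency map; the `seen` set plays the role of the unique packet identifier used to discard duplicates:

```python
def flood(graph, source):
    """Simulate flooding: every node retransmits an incoming packet on all
    of its links except the one it arrived on; duplicates are discarded.
    `graph` maps each node name to the set of its neighbours."""
    seen = {source}               # nodes that have already seen the packet
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbour in graph[node]:
                if neighbour not in seen:   # a duplicate would be discarded
                    seen.add(neighbour)
                    nxt.append(neighbour)
        frontier = nxt
    return seen                   # every reachable node is visited

# An invented four-node topology for illustration:
net = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
visited = flood(net, "A")
```

As the text notes, every reachable node is visited, which is why flooding is robust but generates heavy traffic.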
Random routing has the simplicity and robustness of flooding with far less traffic load. With random routing,
instead of each node forwarding packets to all outgoing links, the node selects only one link for transmission.
This link is chosen at random, excluding the link on which the packet arrived. Often the decision is completely
random, but a refinement of this technique is to apply a probability to each link. This probability could be
based on some performance criterion, such as throughput.
Like flooding, random routing requires the use of no network information. The traffic generated is much
reduced compared to flooding. However, unlike flooding, random routing is not guaranteed to find the
shortest route from the source to the destination.
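A minimal Python sketch of random routing, using an invented topology; note that the route found is generally not the shortest:

```python
import random

def random_route(graph, source, destination, max_hops=50):
    """Forward a packet over one randomly chosen outgoing link at each
    node, excluding the link it arrived on. The route found is not
    guaranteed to be the shortest one."""
    path = [source]
    previous, node = None, source
    while node != destination and len(path) <= max_hops:
        choices = [n for n in graph[node] if n != previous]
        if not choices:                    # dead end: allow turning back
            choices = list(graph[node])
        previous, node = node, random.choice(choices)
        path.append(node)
    return path

# On this small line of nodes the packet can only wander forward.
net = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
path = random_route(net, "A", "C")
```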
In almost all packet switching networks some form of adaptive routing is used. The term adaptive routing
means that the routing decisions that are made change as conditions on the network change. The two
principal factors that can influence changes in routing decisions are failure of a node or a link, and congestion
(if a particular link has a heavy load it is desirable to route packets away from that link).
For adaptive routing to be possible, information about the state of the network must be exchanged among the
nodes. This has a number of disadvantages. First, the routing decision is more complex, thus increasing the
processing overheads at each node. Second, the information that is used may not be up-to-date. Getting
up-to-date information requires a continuous exchange of routing information between nodes, thus increasing
network traffic. Therefore there is a tradeoff between the quality of information and the network traffic overheads.
Finally, it is important that an adaptive strategy does not react too slowly or too quickly to changes. If it reacts
too slowly it will not be useful. But if it reacts too quickly it may result in an oscillation, in which all network
traffic makes the same change of route at the same time.
However, despite these dangers, adaptive routing strategies generally offer real benefits in performance,
hence their popularity. Two examples of adaptive routing strategies are distance-vector routing and link-state
routing.
Using the distance vector technique network devices periodically exchange information about their routing
tables. The exchange of information typically takes place every 30 seconds, is two-way and consists of the
entire routing table. The routing table contains a list of destinations, together with the corresponding next hop
and the distance to the destination. The measure of distance is usually simplified so that each hop represents a
distance of 1. Upon receiving the routing table from a neighbouring device, each device will compare the
information it receives with its own routing table and update it if necessary. The distance vector technique is
simple to implement but it has a number of weaknesses. First, it cannot distinguish between fast and slow
connections, and second it takes time to broadcast the entire routing table around the network.
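One round of a distance-vector exchange can be sketched as follows; the tables and node names are invented for illustration, and each hop counts as distance 1:

```python
def merge_routing_info(own, neighbour_table, neighbour, link_cost=1):
    """One step of a distance-vector exchange: merge a neighbour's routing
    table into our own, keeping whichever distance is shorter. Tables map
    destination -> (next_hop, distance), and each hop costs 1."""
    updated = dict(own)
    for dest, (_, dist) in neighbour_table.items():
        candidate = dist + link_cost        # reach dest via the neighbour
        if dest not in updated or candidate < updated[dest][1]:
            updated[dest] = (neighbour, candidate)
    return updated

# Node A knows itself and its neighbour B; B advertises a route to C.
a = {"A": ("A", 0), "B": ("B", 1)}
b = {"A": ("A", 1), "C": ("C", 1)}
a = merge_routing_info(a, b, neighbour="B")   # A now reaches C via B
```

The sketch also shows the technique's weakness: every hop counts the same, so it cannot distinguish a fast link from a slow one.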
In the link state technique, each network device periodically tests the speed of all of its links. It then
broadcasts this information to the entire network. Each device can therefore construct a graph with weighted
edges that represent the network connectivity and performance (e.g. see Figure 2). The device can then use a
shortest path algorithm, such as Dijkstra's algorithm, to compute the best route for a packet to take.
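A minimal implementation of Dijkstra's algorithm in Python, using a priority queue; the link costs below are illustrative, not taken from Figure 2:

```python
import heapq

def dijkstra(graph, source):
    """Compute the lowest-cost route from source to every other node.
    `graph` maps each node to a dict of {neighbour: link cost}."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                        # stale queue entry; skip it
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# Five devices with illustrative link costs.
net = {1: {2: 5, 3: 1}, 2: {1: 5, 4: 2}, 3: {1: 1, 4: 2},
       4: {2: 2, 3: 2, 5: 1}, 5: {4: 1}}
costs = dijkstra(net, 1)   # lowest cost to device 5 is 4, via 3 and 4
```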
Ethernet has become the most popular way of networking desktop computers and is still very commonly used
today in both small and large network environments.
Standard specifications for Ethernet networks are produced by the Institute of Electrical and Electronics
Engineers (IEEE) in the USA, and there have been a large number over the years. The original Ethernet
standard used a bus topology, transmitted at 10 Mbps, and relied on CSMA/CD to regulate traffic on the main
cable segment. The Ethernet medium was passive, which means it required no power source of its own and thus
would not fail unless the medium was physically cut or improperly terminated. More recent Ethernet standards
have different specifications.
Packets in Ethernet networks are referred to as frames. The format of an Ethernet frame has remained largely
the same throughout the various standards produced by the IEEE, and is shown below.
ETHERNET FRAME FORMAT
Each frame begins with a 7-byte preamble. Each byte has the identical pattern 10101010, which is used to help
the receiving computer synchronise with the sender. This is followed by a 1-byte start frame delimiter (SFD),
which has the pattern 10101011. Next are the source and destination addresses, which take up 6 bytes each.
The data can be of variable length (46-1500 bytes), so before the data itself there is a 2-byte field that indicates
the length of the following data field. Finally there is a 4-byte frame check sequence, used for cyclic
redundancy checking. Therefore the minimum and maximum lengths of an Ethernet frame are 72 bytes and
1526 bytes respectively.
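The frame-length arithmetic above can be checked with a few lines of Python:

```python
def ethernet_frame_length(data_len: int) -> int:
    """Total length in bytes of a classic Ethernet frame carrying
    data_len bytes of data (the data field must be 46-1500 bytes)."""
    if not 46 <= data_len <= 1500:
        raise ValueError("data field must be 46-1500 bytes")
    preamble, sfd, dest, src, length, fcs = 7, 1, 6, 6, 2, 4
    return preamble + sfd + dest + src + length + data_len + fcs

print(ethernet_frame_length(46))    # 72: the minimum frame length
print(ethernet_frame_length(1500))  # 1526: the maximum frame length
```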
Although there have been a number of different standards for the Ethernet architecture over the years, a
number of features have remained the same. The table below summarises the general features of Ethernet
LANs.
FEATURE                Description
Traditional topology   Linear bus
Other topologies       Star bus
The first phase of Ethernet standards had a transmission speed of 10Mbps. Three of the most common of
these are known as 10Base2, 10Base5 and 10BaseT. The following table summarises some of the features of
each specification.
ETHERNET STANDARDS
                          10Base2                   10Base5                   10BaseT
Topology                  Bus                       Bus                       Star bus
Cable type                Thinnet coaxial           Thicknet coaxial          UTP (Cat. 3 or higher)
Simplex/half/full duplex  Half duplex               Half duplex               Half duplex
Data encoding             Manchester, asynchronous  Manchester, asynchronous  Manchester, asynchronous
Connector                 BNC                       DIX or AUI                RJ45
Max. segment length       185 metres                500 metres                100 metres
Note that although the 10BaseT standard uses a physical star topology, it still uses a logical bus topology.
This combination is sometimes referred to as a "star-shaped bus". In addition to these three, a number of
standards existed for use with fibre-optic cabling, namely 10BaseFL, 10BaseFB and 10BaseFP.
The next phase of Ethernet standards was known as fast Ethernet, and increased transmission speed up to
100Mbps. Fast Ethernet is probably the most common standard in use today. The Manchester encoding
technique used in the original Ethernet standards is not well suited to high frequency transmission so new
encoding techniques were developed for fast Ethernet networks. Three of the most common fast Ethernet
standards are summarised below, although others do exist (e.g. 100BaseT2).
The most recent phase of Ethernet standards has increased transmission speeds up to 1000Mbps, although
sometimes at the expense of some other features, such as maximum segment length. Because of the
transmission speed, it has become known as Gigabit Ethernet, and the most common standards are
summarised below.
Finally, the IEEE has also published a number of standards for wireless Ethernet networks. The original
standard was known as 802.11, was very slow (around 2Mbps) and was quickly superseded by more efficient
standards. 802.11 now usually refers to the family of standards that followed after this original standard.
WIRELESS ETHERNET STANDARDS
                        802.11b    802.11a    802.11g
Max. speed              11Mbps     54Mbps     54Mbps
Ave. speed              4.5Mbps    20Mbps     20Mbps
Max. distance outdoors  120m       30m        30m
Max. distance indoors   60m        12m        20m
Broadcast frequency     2.4GHz     5GHz       2.4GHz
The CSMA/CA access method has become the standard access method for use in wireless networking.