

Chapter 3: Data Communication & Switching

3.1. Data Communication Fundamentals


As discussed in the previous chapter, data communication is the transmission of signals (the transfer of
information) in a reliable and efficient manner from one device to another.

COMMUNICATION MODEL (SYSTEM)

The purpose of a communications system is to exchange data between two entities.

Figure 3.1. Basic Elements of a Communication System
 Source: entity that generates data; e.g., a person who speaks into the phone, or a computer sending data to the
modem.

 Transmitter: a transmitter converts the electrical signal into a form that is suitable for transmission
through the physical channel or transmission medium. In general, a transmitter performs the matching of
the message signal to the channel by a process called modulation.
The choice of the type of modulation is based on several factors, such as:

- the amount of bandwidth allocated,

- the type of noise and interference that the signal encounters in transmission over the channel,
- and the electronic devices that are available for signal amplification prior to transmission.
 Transmission System (Channel): the communication channel is the physical medium that connects the
transmitter to the receiver. The physical channel may be a pair of wires that carry the electrical signals,
an optical fiber that carries the information on a modulated light beam, or free space, in which the
information-bearing signals are electromagnetic waves.
 Receiver: the function of a receiver is to recover the message signal contained in the received signal.
The main operations performed by a receiver are demodulation, filtering and decoding.
 Destination: entity that finally uses the data.
Example: - computer on other end of a receiving modem.
 Information: the information generated by the source may be in the form of voice, a picture or plain text. An essential
feature of any source that generates information is that its output is described in probabilistic terms; that is, the
output is not deterministic. A transducer is usually required to convert the output of a source into an electrical
signal that is suitable for transmission.

3.1.1. Analog and Digital Data Transmission

 Data are entities that convey information.


 Signals are electrical encoding (representation) of data.
 Signaling is the act of propagation (spread) of signals through a suitable medium.
The terms analog and digital correspond to continuous and discrete, respectively. These two terms are frequently
used in data communications.
Periodic and Aperiodic Signals

As a matter of fact, both analog and digital signals can take one of the following signal forms

 Periodic Signal
A periodic signal completes a pattern within a measurable time frame and repeats that pattern over
subsequent identical periods. The completion of one full pattern is known as a cycle.
 Aperiodic Signal
An Aperiodic signal changes without exhibiting (showing) a pattern or cycle that repeats over time.

Analog signal

Analog data takes on continuous values on some interval. The most familiar examples of analog data are audio,
video, the outputs of many sensors such as temperature and pressure sensors, etc.
In most cases we use periodic analog signals. Some basic terms associated with analog signals are:

 Peak Amplitude: the absolute value of the signal’s highest intensity, which is proportional to
the energy the signal carries.
 Period: the amount of time required by a signal to complete one cycle. The period is
expressed in seconds.
 Frequency: the number of periods per second.
Figure 3.2. Analog Signal

Frequency is the number of times per second that the wave cycle repeats (oscillates). Period and frequency
have an inverse relationship (f = 1/T). Shorter wavelengths produce higher frequencies because the waves are closer together,
and vice versa. The unit used to measure frequency is the hertz (Hz), which means cycles per second. The unit can be
expanded by adding prefixes as follows.

1 kilohertz (kHz) = 1,000 cycles/second

1 megahertz (MHz) = 1,000,000 cycles/second

1 gigahertz (GHz) = 1,000,000,000 cycles/second

1 terahertz (THz) = 1,000,000,000,000 cycles/second
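
As a small illustration of the inverse relationship between period and frequency, the sketch below (a hypothetical helper written for this chapter, not part of any standard library) converts a period in seconds to a frequency and prints it with the appropriate prefix from the list above.

# Minimal sketch: frequency is the reciprocal of period (f = 1/T).
# The prefix table mirrors the unit expansions listed above.

PREFIXES = [(1e12, "THz"), (1e9, "GHz"), (1e6, "MHz"), (1e3, "kHz"), (1.0, "Hz")]

def frequency_from_period(period_seconds: float) -> float:
    """Return the frequency in hertz for a signal with the given period."""
    return 1.0 / period_seconds

def format_frequency(hz: float) -> str:
    """Express a frequency using the largest prefix that keeps the value >= 1."""
    for factor, unit in PREFIXES:
        if hz >= factor:
            return f"{hz / factor:g} {unit}"
    return f"{hz:g} Hz"

if __name__ == "__main__":
    # A period of 1 millisecond corresponds to 1 kHz; 1 nanosecond to 1 GHz.
    for period in (1e-3, 1e-6, 1e-9):
        print(f"T = {period:g} s  ->  f = {format_frequency(frequency_from_period(period))}")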

Bandwidth Capacity

Bandwidth is the transmission capacity of a communications channel. In the world of networking, bandwidth
is measured in terms of megabits per second. A medium with a high capacity has a high bandwidth; a medium
with a low capacity has a low bandwidth.

Note: The bandwidth of an analog signal is measured in hertz (Hz).

Digital signal

A signal that is discrete with respect to time is called a digital signal. Such a signal can be modeled using the binary
number system, as shown in the figure below.

Figure Digital Signal

Note: The bandwidth of a digital signal is usually measured in bits per second (BPS). The unit can be expanded
by adding prefixes as follows.

1 KiloBPS = 1 KBPS = 1,000 BPS

1 MegaBPS = 1 MBPS = 1,000,000 BPS

1 GigaBPS = 1 GBPS = 1,000,000,000 BPS

1 TeraBPS = 1 TBPS = 1,000,000,000,000 BPS

Bit Interval and Bit Rate


Most digital signals are aperiodic, and thus terms like period or frequency are not appropriate. Two new terms,
bit interval (instead of period) and bit rate (instead of frequency), are used to describe a digital signal. The bit
interval is the time required to send one single bit. The bit rate is the number of bit intervals per second; in other
words, the bit rate is the number of bits sent in one second, usually expressed in bits per second (bps).
Just as with period and frequency, bit interval and bit rate have an inverse relationship.
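
To make the inverse relationship concrete, the short sketch below (hypothetical helper names chosen for this chapter) derives the bit rate from a given bit interval and then estimates how long a transfer would take at that rate.

# Minimal sketch: bit rate is the reciprocal of bit interval (rate = 1 / interval).

def bit_rate_from_interval(bit_interval_seconds: float) -> float:
    """Return the bit rate in bits per second for the given bit interval."""
    return 1.0 / bit_interval_seconds

def transfer_time(num_bits: int, bit_rate_bps: float) -> float:
    """Return the time in seconds needed to send num_bits at bit_rate_bps."""
    return num_bits / bit_rate_bps

if __name__ == "__main__":
    interval = 1e-6                      # a bit interval of 1 microsecond...
    rate = bit_rate_from_interval(interval)
    print(f"bit rate = {rate:.0f} bps")  # ...gives a bit rate of 1,000,000 bps (1 MBPS)
    # Sending a 1,000,000-bit file at that rate takes one second.
    print(f"transfer time = {transfer_time(1_000_000, rate):.1f} s")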

3.2. Data Transmission

As discussed earlier, analog signals are used to represent analog data and digital signals are used to represent
digital data. Both kinds of data can be transmitted using either analog signal transmission or digital signal
transmission.

3.2.1. Analog signal Transmission


Analog signal transmission is a means of transmitting signals without regard to their content. The signal may
represent analog data (like voice) or digital data (binary data that passes through a modem); see the following
figure.

Figure 3.3 Two ways of transmitting analog information

In either case, the analog signal becomes weaker (attenuates) after a certain distance. To achieve
longer distances, the analog transmission system includes amplifiers that increase the energy of the
signal, but they also amplify the noise. As a result, signals sent by analog transmission are exposed to high
noise.

3.2.2. Digital Signal Transmission


Digital transmission is the transfer of information through a medium in digital form. A digital signal can be
transmitted only for a limited distance before attenuation, noise and other impairments distort the integrity of
the data. To achieve greater distances, repeaters are used. A repeater receives the digital signal, filters out the noise,
recovers the pattern of 0s and 1s and retransmits a new signal. As a result, digital transmission is more immune
to noise than analog transmission and is suitable for long distances. The data transmitted in digital transmission
may be analog or digital. The following figure illustrates two methods of sending data from computer A to
computer B; both cases are examples of data communications.

3.2.3. Communication Modes


As discussed in the previous chapter, the data communication mode refers to the direction of signal flow
between two nodes of the network. There are three modes of communication: simplex
(unidirectional), half duplex (bidirectional, but not simultaneously) and full duplex (bidirectional,
simultaneously).
3.2.4. Data Transmission Modes
3.2.4.1. Parallel
In parallel transmission, the bits of a data unit (for example, one byte) are sent at the same time, each over its own wire.
3.2.4.2. Serial
In serial transmission, bits are sent one after another over a single channel. Serial transmission may be synchronous or asynchronous.
3.2.4.2.1. Synchronous
In synchronous transmission, bits are sent as a continuous stream, with the receiver kept in step with the sender by a shared clock or timing pattern.
3.2.4.2.2. Asynchronous
In asynchronous transmission, data is sent one character at a time, framed by start and stop bits, and arbitrary gaps may occur between characters.
3.2.5. Transmission Impairments
Signals degrade as they travel through a medium; the main impairments are attenuation (loss of energy), distortion (change in form) and noise.

3.3. Switching Concepts


Switching is a component of a network’s logical topology that determines how connections are
created and how data are handled between nodes. There are three methods for switching: circuit
switching, message switching, and packet switching.

3.3.1. Circuit Switching


In circuit switching, a connection is established between two network nodes before they begin transmitting
data. Bandwidth is dedicated to this connection and remains available until the users terminate
communication between the two nodes. While the nodes remain connected, all data follows the same path
initially selected by the switch. When you place a telephone call, for example, your call typically uses a circuit-
switched connection. Because circuit switching monopolizes its piece of bandwidth while the two stations
remain connected (even when no actual communication is taking place), it can result in a waste of available
resources. Another example of circuit switching occurs when you connect your home PC via modem to your
Internet service provider’s access server. WAN technologies, such as ISDN and T1 service, also use circuit
switching.
A circuit-switching network defines a static path from one point to another; so long as the two points are
connected, all data traveling between those two points will take the same path. Thus, it's unnecessary to
include addressing information in the packet with the data. Because there's only one path, the data can't get
lost.
Phases in Circuit Switching

Transfer of data units through a circuit switched network involves three operational phases

1. Connection Establishment Phase

2. Data Transfer Phase

3. Connection Release Phase

1. Connection Establishment Phase


 The call-originating end system transmits a connection request with the destination address to the entry
node

 The entry node sets up a path by cross-connecting one of the outgoing trunk circuits in the direction of the
destination end system
 The address information is then transferred to the next node, where again a cross-connection between the
incoming and outgoing trunks is established

 This process is repeated at each intermediate node and at the exit node, which serves the destination end
system

 The exit node sends an incoming-call indication to the destination end system, which returns a call acceptance.
The network then confirms establishment of the connection to the call-originating end system

The network resources allocated for the purpose of setting up the connection are for the exclusive use of the
end systems to transport their data

2. Data Transfer Phase


 Data transfer follows after the connection confirmation is received

Basic features of the data transfer service:


 The same connection is used by both end systems to communicate

 The network nodes cannot store the data units

– The data rates at the source and destination remain the same

 The address of the destination is specified only once, during call set-up. All subsequent data units are
transmitted on the path already established

 The nodes do not carry out any kind of error control

3. Connection Release Phase


 The connection is released at the request of the end systems, and after the release the network resources
that were engaged for setting up the connection are also released

Circuit Switching: Advantage & Limitation

• Advantage:

– Since there is a dedicated transmission channel, it provides a good data rate (there is a guaranteed
connection)

– There is no delay in data flow once the connection is established

• Limitation:

– Since the connection is dedicated, it cannot be used to transmit any other data even if the channel
is free (it is inefficient)

– A dedicated channel needs more bandwidth

– It takes a long time to set up the connection

– Although data flows without delay, the channel remains busy to other users for as long as the two
stations stay connected


3.3.2. Message switching

Message switching establishes a connection between two devices, transfers the information to the second
device, and then breaks the connection. The information is stored and forwarded from the second device after
a connection between that device and a third device on the path is established. This “store and forward”
routine continues until the message reaches its destination. All information follows the same physical path;
unlike with circuit switching, however, the connection is not continuously maintained. Message switching
requires that each device in the data’s path has sufficient memory and processing power to accept and store
the information before passing it to the next node.

Each message is treated as an independent unit and includes its own destination and source address. Each
complete message is then transmitted from device to device through the internetwork. Each intermediate
device receives the message and stores it until the next device is ready to receive it, and then forwards it to the next
device. As a result, this type of network is sometimes known as a store-and-forward network. Example: telegraph networks.

Message switching: Advantage & Disadvantage

Advantage

 It provides efficient traffic management by assigning priorities to the messages to be switched.


 It reduces network traffic congestion because messages can be stored until a communication channel
becomes available
 The network devices share the data channels
 It provides asynchronous communication across time zones
Limitation

 Storing and forwarding introduces delay, so it cannot be used for real-time applications such as voice and
video.
 Intermediate devices require a large storage capacity, since they have to store each message until a free path
becomes available.

3.3.3. Packet Switching


A third and by far the most popular method for connecting nodes on a network is packet switching. Packet switching
breaks data into packets before they are transported. Packets can travel any path on the network to their destination,
because each packet contains the destination address and sequencing information. Consequently, packets can attempt to
find the fastest circuit available at any instant. They need not follow each other along the same path, nor must they arrive
at their destination in the same sequence as when they left their source.

When packets reach their destination node, the node reassembles them based on their control information. Because of
the time it takes to reassemble the packets into a message, packet switching is not optimal for live audio or video
transmission. Nevertheless, it is a fast and efficient mechanism for transporting typical network data, such as e-mail
messages, spreadsheet files, or even software programs from a server to a client. The greatest advantage of packet
switching lies in the fact that it does not waste bandwidth by holding a connection open until a message reaches its
destination, as circuit switching does. And unlike message switching, it does not require devices in the data’s path to
process any information. Ethernet networks and the Internet are the most common examples of packet-switched
networks.

In a packet-switching network there is no direct connection from point A to point B. Rather than a direct
connection, the packet-switching network has a mesh of paths between the two points (see the following
Figure).

Figure A packet-switching network has no set paths for data to travel

There are three different ways in which packets can be addressed:

 Unicast: packet is addressed to a single destination


 Multicast: packet is addressed simultaneously to multiple destinations
 Broadcast: packet is sent simultaneously to all stations on the network
Generally, in packet switching, messages are first divided into smaller pieces called packets. Each packet
includes source and destination address information so that individual packets can be routed through the
transmission system, as illustrated in the sketch after the following list.

o A packet contains: information about source and target address, length of data, packet
sequence number, flags to indicate beginning and end, etc.

o Data may be sent, and arrive, out of sequence

o Small chunks (packets) of data at a time

o Packets passed from node to node between source and destination

o One packet doesn’t necessarily follow the path followed by another packet.

o Data is assembled at the destination

o Typical of computer networks
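
The points above can be illustrated with a minimal sketch (the field names and chunk size are illustrative, not those of any real protocol) that splits a message into small packets carrying source/destination addresses and sequence numbers, and then reassembles the message at the destination regardless of arrival order.

import random
from dataclasses import dataclass

# Minimal sketch of packetisation and reassembly; field names are illustrative only.

@dataclass
class Packet:
    source: str
    destination: str
    sequence: int
    data: bytes

def packetize(message: bytes, source: str, destination: str, chunk: int = 4) -> list:
    """Split a message into fixed-size packets, each tagged with a sequence number."""
    return [Packet(source, destination, i, message[i * chunk:(i + 1) * chunk])
            for i in range((len(message) + chunk - 1) // chunk)]

def reassemble(packets: list) -> bytes:
    """Rebuild the original message by sorting packets on their sequence numbers."""
    return b"".join(p.data for p in sorted(packets, key=lambda p: p.sequence))

if __name__ == "__main__":
    packets = packetize(b"HELLO PACKET SWITCHING", "1.2", "3.1")
    random.shuffle(packets)          # packets may arrive out of order...
    print(reassemble(packets))       # ...but the destination restores the sequence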

There are actually two types of packet switching. In connection-oriented (CO), or virtual-circuit, packet switching, a route across
the network is established and all packets of data follow that route. In CO packet switching, the sender first
requests a connection to the receiver and waits for the connection to be established. Once it is established, the
virtual connection is left in place whilst the data is transmitted. When data transmission is complete, the
connection is given up. You can think of CO packet switching as being like a telephone call: when you are
talking to somebody on the phone, all information is sent by the same route over the telephone network.
When you put the phone down, the connection is given up. A specific case of CO packet switching is the virtual
private network (VPN). A VPN is a private network that uses public network infrastructure. A series of
encrypted logical connections (or tunnels) are made across the public network, enabling computers in different
parts of the world to communicate as if they were on a private network. In this respect CO packet switching is similar to circuit switching.

In connectionless (CL) packet switching, no pre-determined route exists, and each packet is routed independently. In this
case, the sender simply prepares the packet for transmission, adds the destination address, and sends it onto
the network. The network hardware then uses the address to route the packet in the best way that it can.
You can think of CL packet switching as being like posting a series of letters: the postal service sends each
letter independently, and you cannot be sure that each letter will follow the same physical route. In this respect it is similar to
message switching.

CO packet switching has more ‘overheads’: before transmission can start, time must be spent setting up the
virtual connection across the network, and after it has finished more time must be spent closing the
connection. However, once transmission has commenced, bandwidth can be reserved, so it is possible to
guarantee higher data rates, which is not possible with CL packet switching. Therefore CO packet switching is
well suited to real-time applications such as streaming of video and/or sound. On the other hand, CL packet
switching is simpler, has fewer overheads, and allows multicast and broadcast addressing.
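
The difference between the two forms of packet switching can be sketched as follows (the table contents are invented for illustration): a connection-oriented switch forwards on a short virtual-circuit identifier agreed during setup, while a connectionless switch looks up the full destination address carried in every packet.

# Minimal sketch contrasting CO (virtual-circuit) and CL (datagram) forwarding.
# Table contents are invented for illustration.

# CO: suppose the setup phase assigned virtual-circuit id 7 to this flow, so data
# packets only need to carry that small identifier.
virtual_circuit_table = {7: "port 2"}          # VC id -> outgoing port

# CL: every packet carries the full destination address, looked up independently.
routing_table = {"network 3": "port 5", "network 1": "port 2"}

def forward_co(vc_id: int) -> str:
    """Forward a connection-oriented packet using its virtual-circuit id."""
    return virtual_circuit_table[vc_id]

def forward_cl(destination_network: str) -> str:
    """Forward a connectionless packet using its full destination address."""
    return routing_table[destination_network]

if __name__ == "__main__":
    print("CO packet on VC 7 ->", forward_co(7))
    print("CL packet for network 3 ->", forward_cl("network 3"))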

3.4. Introduction to Routing


It was mentioned above that a major difference between CO and CL networks is whether a route is determined
for all packets at once or determined individually for each packet. We will now deal with the subject of how
these routes are determined. One important concept to understand before we begin is that of a routing table.

3.4.1 Routing tables

A routing table is stored in the RAM of a network device such as a bridge, switch or router, and contains
information about where to forward data to, based on its destination address. For example, Figure 1 shows a
simple network consisting of 3 switches and 6 computers. Each switch connects a different sub-network. Each
computer has an address consisting of a network number followed by a computer number (e.g. computer A is
in network number 1 and has computer number 2). The routing table is shown for switch 3, and indicates
where the next destination (or next hop) should be for reaching each address on the network.

For instance, if computer C sends data to computer A, then switch 3 will first look at the destination address (1,
2), and then look up this address in its routing table. It finds that the next hop for this address is port 5 of the
switch, and so sends the data to this port and no other.
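
As a concrete illustration of this lookup, the sketch below models switch 3's routing table as a dictionary keyed by (network, computer) addresses. Only the entry for computer A is taken from the example above; since the figure itself is not reproduced here, the remaining entries and port numbers are invented placeholders.

# Minimal sketch of a routing-table lookup for "switch 3" in the example above.
# Only the entry for computer A (network 1, computer 2) -> port 5 comes from the
# text; the other entries are invented placeholders.

routing_table_switch3 = {
    (1, 2): "port 5",   # computer A, as described in the text
    (1, 1): "port 5",   # placeholder entries for the rest of the network
    (2, 1): "port 6",
    (2, 2): "port 6",
}

def next_hop(destination: tuple) -> str:
    """Return the outgoing port (next hop) for a destination address."""
    return routing_table_switch3[destination]

if __name__ == "__main__":
    # Computer C sends data to computer A at address (1, 2):
    print("forward via", next_hop((1, 2)))   # -> forward via port 5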
Figure 3.X

3.4.2. Routing strategies

The next question is how the data in the routing table is determined. We will now look at some of the common
strategies used to route packets in packet-switching networks. We will first survey some of the key
characteristics of such strategies, and then examine some specific routing strategies.
Characteristics of routing strategies

The primary function of a packet switching network is to accept packets from a source station and deliver them
to a destination station. To accomplish this a route through the network must be established. Often, more than
one route is possible. Thus, the ‘best’ route must be determined. There are a number of requirements that this
decision should take into account:
 Correctness
 Simplicity
 Robustness
 Stability
 Fairness
 Optimality
 Efficiency

The first two requirements are straightforward: correctness means that the route must lead to the correct
destination; and simplicity means that the algorithm used to make the decision should not be too complex.
Robustness has to do with the ability of the network to cope with network failures and overloads. Ideally, the
network should react to such failures without losing packets and without breaking virtual circuits. Stability
means that the network should not overreact to such failures: the performance of the network should remain
reasonably stable over time. A tradeoff exists between fairness and optimality. The optimal route for one
packet is the shortest route (measured by some performance criterion). However, giving one packet its optimal
route may adversely affect the delivery of other packets. Fairness means that overall most packets should have
a reasonable performance. Finally, any routing strategy involves some overheads and processing to calculate
the best routes. Efficiency means that the benefits of these overheads should outweigh their cost.
The selection of a route is generally based on some performance criterion. The simplest criterion to use is the
smallest number of hops between the source and destination. A hop generally refers to a journey between two
network nodes. A network node could be a computer, router, or other network device. A slightly more
advanced technique is to assign a cost to each link in the network. A shortest path algorithm can then be used
to calculate the lowest cost route. For example, in Figure 2, there are 5 network devices. The weighted edges
between them represent the costs of the connections. To send data from device 1 to device 5, the shortest
path is via devices 3 and 4. However, the shortest number of hops would be via device 3 only. The cost used
could be related to the throughput (i.e. speed) of the link, or related to the current queuing delay on the link.

Figure 2 – A weighted graph illustrating network connectivity

Two key characteristics of the routing decision are when and where it is made. The decision time is determined
by whether we are using a CO network or a CL network. For CL networks the route is established
independently for each packet. For CO networks the route is established once at the time the virtual circuit is
set up. The decision place refers to which node(s) are responsible for the routing decision. The most common
technique is distributed routing, in which each node has the responsibility to forward each packet as it arrives.
For centralized routing all routing decisions are made by a single designated node. The danger of this approach
is that if this node is damaged or lost the operation of the network will cease. In source routing, the routing
path is established by the node that is sending the packet.

Almost all routing strategies will make their routing decisions based upon some information about the state of
the network. The network information source refers to where this information comes from, and the network
information update timing refers to how often this information is updated. Local information means just using
information from outgoing links from the current node. An adjacent information source means any node which
has a direct connection to the current node. The update timing of a routing strategy can be continuous
(updating all the time), periodic (every t seconds), or occur when there is a major load or topology change.

Examples of routing strategies

Now that we are familiar with some of the characteristics and elements of routing strategies, we will examine
some specific examples.

3.4.2.1 Fixed routing

In fixed routing, a single, permanent route is established for each source-destination pair in the network. We
say in this case that the routing table of each network device is static, i.e. it will not change once assigned.
These routes can be calculated using a shortest path algorithm based on some cost criterion, or for simple
networks they can be assigned manually by the network administrator. Fixed routing is a simple scheme, and it
works well in a reliable network with a stable load. However, it does not respond to network failures, or
changes in network load (e.g. congestion).

3.4.2.2 Flooding

Another simple routing technique is flooding. This technique requires no network information at all, and works
as follows. A packet is sent by the source to each of its adjacent nodes. At each node, incoming packets are
retransmitted on every outgoing link apart from the one on which they arrived. If/when a duplicate packet arrives
at a node, it is discarded. This identification is made possible by attaching a unique identifier to each packet.

With flooding, all possible routes between the source and the destination are tried. Therefore so long as a path
exists at least one packet will reach the destination. This means that flooding is a highly robust technique, and
is sometimes used to send emergency information. Furthermore, at least one packet will have used the least
cost route. This can make it useful for initializing routing tables with least cost routes. Another property of
flooding is that every node on the network will be visited by a packet. This means that flooding can be used to
propagate important information on the network, such as routing tables.

A major disadvantage of flooding is the high network traffic that it generates. For this reason it is rarely used
on its own, but as described above it can be a useful technique when used in combination with other routing
strategies.
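
A minimal simulation of flooding, assuming a small hand-made topology, shows the two properties described above: every node is eventually visited, and duplicate copies are discarded (in a real network a unique packet identifier performs this duplicate detection; here a per-packet 'seen' set plays that role).

from collections import deque

# Minimal flooding sketch on an invented topology.

topology = {                      # adjacency list: node -> neighbouring nodes
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def flood(source: str) -> set:
    """Flood one packet from source and return the set of nodes that received it."""
    seen = set()                             # nodes that have already handled this packet
    queue = deque([(source, None)])          # (current node, node it arrived from)
    while queue:
        node, came_from = queue.popleft()
        if node in seen:
            continue                         # duplicate copy -> discard
        seen.add(node)
        for neighbour in topology[node]:
            if neighbour != came_from:       # do not retransmit on the arrival link
                queue.append((neighbour, node))
    return seen

if __name__ == "__main__":
    print(sorted(flood("A")))                # every node is visited: ['A', 'B', 'C', 'D']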

3.4.2.3 Random routing

Random routing has the simplicity and robustness of flooding with far less traffic load. With random routing,
instead of each node forwarding packets to all outgoing links, the node selects only one link for transmission.
This link is chosen at random, excluding the link on which the packet arrived. Often the decision is completely
random, but a refinement of this technique is to apply a probability to each link. This probability could be
based on some performance criterion, such as throughput.

Like flooding, random routing requires no network information. The traffic generated is much
reduced compared to flooding. However, unlike flooding, random routing is not guaranteed to find the
shortest route from the source to the destination.

3.4.2.4 Adaptive routing

In almost all packet switching networks some form of adaptive routing is used. The term adaptive routing
means that the routing decisions that are made change as conditions on the network change. The two
principal factors that can influence changes in routing decisions are failure of a node or a link, and congestion
(if a particular link has a heavy load it is desirable to route packets away from that link).

For adaptive routing to be possible, information about the state of the network must be exchanged among the
nodes. This has a number of disadvantages. First, the routing decision is more complex, thus increasing the
processing overheads at each node. Second, the information that is used may not be up to date. Getting up-to-date
information requires a continuous exchange of routing information between nodes, thus increasing
network traffic. Therefore there is a tradeoff between quality of information and network traffic overheads.
Finally, it is important that an adaptive strategy does not react too slowly or too quickly to changes. If it reacts
too slowly it will not be useful. But if it reacts too quickly it may result in an oscillation, in which all network
traffic makes the same change of route at the same time.

However, despite these dangers, adaptive routing strategies generally offer real benefits in performance,
hence their popularity. Two examples of adaptive routing strategies are distance-vector routing and link-state
routing.

3.4.2.5 Distance-vector routing

Using the distance vector technique network devices periodically exchange information about their routing
tables. The exchange of information typically takes place every 30 seconds, is two-way and consists of the
entire routing table. The routing table contains a list of destinations, together with the corresponding next hop
and the distance to the destination. The measure of distance is usually simplified so that each hop represents a
distance of 1. Upon receiving the routing table from a neighbouring device, each device will compare the
information it receives with its own routing table and update it if necessary. The distance vector technique is
simple to implement but it has a number of weaknesses. First, it cannot distinguish between fast and slow
connections, and second it takes time to broadcast the entire routing table around the network.
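
A minimal sketch of the distance-vector update step follows, assuming (as in the text) that every hop counts as a distance of 1: when a neighbour's table arrives, a device adopts any route that becomes shorter by going through that neighbour. The table layout and names are illustrative only.

# Minimal distance-vector update sketch. Tables map destination -> (next_hop, distance),
# with every hop counted as distance 1, as described in the text.

def update_table(my_table: dict, neighbour: str, neighbour_table: dict) -> dict:
    """Merge a neighbour's routing table into ours, keeping the shorter distances."""
    updated = dict(my_table)
    for destination, (_, distance) in neighbour_table.items():
        via_neighbour = distance + 1                 # one extra hop to reach the neighbour
        if destination not in updated or via_neighbour < updated[destination][1]:
            updated[destination] = (neighbour, via_neighbour)
    return updated

if __name__ == "__main__":
    my_table = {"net1": ("R2", 2)}
    neighbour_table = {"net1": ("R4", 0), "net3": ("R5", 1)}   # as advertised by R3
    print(update_table(my_table, "R3", neighbour_table))
    # net1 is now reachable via R3 at distance 1; net3 is learned via R3 at distance 2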

3.4.2.6 Link-state routing

In the link state technique, each network device periodically tests the speed of all of its links. It then
broadcasts this information to the entire network. Each device can therefore construct a graph with weighted
edges that represent the network connectivity and performance (e.g. see Figure 2). The device can then use a
shortest path algorithm such as Dijkstra’s algorithm to compute the best route for a packet to take.
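
As a sketch of this shortest-path computation, the code below runs Dijkstra's algorithm on an invented weighted graph; the edge costs are illustrative only and do not come from Figure 2.

import heapq

# Minimal Dijkstra sketch over an invented weighted graph.

graph = {                                   # node -> {neighbour: link cost}
    1: {2: 4, 3: 1},
    2: {1: 4, 4: 2},
    3: {1: 1, 4: 5, 5: 8},
    4: {2: 2, 3: 5, 5: 1},
    5: {3: 8, 4: 1},
}

def dijkstra(source: int) -> dict:
    """Return the lowest cost from source to every reachable node in the graph."""
    costs = {source: 0}
    frontier = [(0, source)]                # priority queue of (cost so far, node)
    while frontier:
        cost, node = heapq.heappop(frontier)
        if cost > costs.get(node, float("inf")):
            continue                        # stale queue entry
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < costs.get(neighbour, float("inf")):
                costs[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost, neighbour))
    return costs

if __name__ == "__main__":
    print(dijkstra(1))                      # e.g. the cheapest route from 1 to 5 costs 7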

3.5. Introduction to Ethernet & Wireless Networks

3.5.1 The Ethernet

Ethernet has become the most popular way of networking desktop computers and is still very commonly used
today in both small and large network environments.

Standard specifications for Ethernet networks are produced by the Institute of Electrical and Electronics
Engineers (IEEE) in the USA, and there have been a large number over the years. The original Ethernet
standard used a bus topology, transmitted at 10 Mbps, and relied on CSMA/CD to regulate traffic on the main
cable segment. The Ethernet medium was passive, which means it required no power source of its own and thus
would not fail unless the medium was physically cut or improperly terminated. More recent Ethernet standards
have different specifications.
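
CSMA/CD handles collisions with truncated binary exponential backoff: after the n-th collision a station waits a random number of slot times chosen between 0 and 2^n - 1 (the exponent is capped at 10). The sketch below illustrates only that backoff rule, assuming the classic 10 Mbps slot time of 51.2 microseconds (512 bit times); it is not a full CSMA/CD implementation.

import random

SLOT_TIME_SECONDS = 51.2e-6     # 512 bit times at 10 Mbps (assumed classic Ethernet)

def backoff_delay(collision_count: int) -> float:
    """Return a random backoff delay after the given number of collisions,
    using truncated binary exponential backoff (exponent capped at 10)."""
    exponent = min(collision_count, 10)
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * SLOT_TIME_SECONDS

if __name__ == "__main__":
    for collisions in (1, 2, 3, 10):
        delay_us = backoff_delay(collisions) * 1e6
        print(f"after {collisions} collision(s): wait {delay_us:.1f} microseconds")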

Packets in Ethernet networks are referred to as frames. The format of an Ethernet frame has remained largely
the same throughout the various standards produced by the IEEE, and is shown below.
ETHERNET FRAME FORMAT


Preamble | SFD    | Destination Address | Source Address | Length  | Data          | FCS
7 bytes  | 1 byte | 6 bytes             | 6 bytes        | 2 bytes | 46-1500 bytes | 4 bytes

Each frame begins with a 7-byte preamble. Each byte has the identical pattern 10101010, which is used to help
the receiving computer synchronise with the sender. This is followed by a 1-byte start frame delimiter (SFD),
which has the pattern 10101011. Next are the source and destination addresses, which take up 6 bytes each.
The data can be of variable length (46-1500 bytes), so before the data itself there is a 2-byte field that indicates
the length of the following data field. Finally there is a 4-byte frame check sequence, used for cyclic
redundancy checking. Therefore the minimum and maximum lengths of an Ethernet frame are 72 bytes and
1526 bytes respectively.
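
The arithmetic behind these minimum and maximum frame lengths can be checked with a short sketch that simply adds up the field sizes given above.

# Field sizes of the classic Ethernet frame, in bytes, taken from the table above.
FIELDS = {
    "preamble": 7,
    "sfd": 1,
    "destination": 6,
    "source": 6,
    "length": 2,
    "fcs": 4,
}
MIN_DATA, MAX_DATA = 46, 1500    # allowed size of the data field

overhead = sum(FIELDS.values())
print("minimum frame:", overhead + MIN_DATA, "bytes")   # -> 72 bytes
print("maximum frame:", overhead + MAX_DATA, "bytes")   # -> 1526 bytes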

Although there have been a number of different standards for the Ethernet architecture over the years, a
number of features have remained the same. The table below summarises the general features of Ethernet
LANs.

FEATURE                  Description

Traditional topology     Linear bus

Other topologies         Star bus

Type of communication    Baseband

Access method            CSMA/CD

Transfer speeds          10/100/1000 Mbps

Cable type               Thicknet/thinnet coaxial or UTP

The first phase of Ethernet standards had a transmission speed of 10Mbps. Three of the most common of
these are known as 10Base2, 10Base5 and 10BaseT. The following table summarises some of the features of
each specification.

ETHERNET STANDARDS
                          10Base2            10Base5             10BaseT
Topology                  Bus                Bus                 Star bus
Cable type                Thinnet coaxial    Thicknet coaxial    UTP (Cat. 3 or higher)
Simplex/half/full duplex  Half duplex        Half duplex         Half duplex
Data encoding             Manchester, async  Manchester, async   Manchester, async
Connector                 BNC                DIX or AUI          RJ45
Max. segment length       185 metres         500 metres          100 metres

Note that although the 10BaseT standard uses a physical star-bus topology, it still uses a logical bus topology.
This combination is sometimes referred to as a “star-shaped bus”. In addition to these three, a number of
standards existed for use with fibre-optic cabling, namely 10BaseFL, 10BaseFB and 10BaseFP.

The next phase of Ethernet standards was known as fast Ethernet, and increased transmission speed up to
100Mbps. Fast Ethernet is probably the most common standard in use today. The Manchester encoding
technique used in the original Ethernet standards is not well suited to high frequency transmission so new
encoding techniques were developed for fast Ethernet networks. Three of the most common fast Ethernet
standards are summarised below, although others do exist (e.g. 100BaseT2).

FAST ETHERNET STANDARDS


                          100BaseT4                100BaseTX                100BaseFX
Topology                  Star bus                 Star bus                 Star bus
Cable type                UTP (Cat. 3 or higher)   UTP (Cat. 5 or higher)   Fibre-optic
Connector                 RJ45                     RJ45                     SC, ST or FDDI MIC
Max. segment length       100 metres               100 metres               2000 metres
Communication type        Half duplex              Full duplex              Full duplex

The most recent phase of Ethernet standards has increased transmission speeds up to 1000Mbps, although
sometimes at the expense of some other features, such as maximum segment length. Because of the
transmission speed, it has become known as Gigabit Ethernet, and the most common standards are
summarised below.

GIGABIT ETHERNET STANDARDS


                          1000BaseT                1000BaseCX                      1000BaseSX    1000BaseLX
Topology                  Star bus                 Star bus                        Star bus      Star bus
Cable type                UTP (Cat. 5 or higher)   Twinax (shielded copper wire)   Fibre-optic   Fibre-optic
Connector                 RJ45                     HSSC                            SC            SC
Max. segment length       100 m                    25 m                            275 m         316-550 m
Communication type        Full duplex              Full duplex                     Full duplex   Full duplex

Finally, the IEEE has also published a number of standards for wireless Ethernet networks. The original
standard was known as 802.11, was very slow (around 2Mbps) and was quickly superseded by more efficient
standards. 802.11 now usually refers to the family of standards that followed after this original standard.
WIRELESS ETHERNET STANDARDS
                          802.11b     802.11a     802.11g
Max. speed                11 Mbps     54 Mbps     54 Mbps
Ave. speed                4.5 Mbps    20 Mbps     20 Mbps
Max. distance outdoors    120 m       30 m        30 m
Max. distance indoors     60 m        12 m        20 m
Broadcast frequency       2.4 GHz     5 GHz       2.4 GHz

The CSMA/CA access method has become the standard access method for use in wireless networking.

3.6. Error Control
