Computer Network and Internet
BACKGROUND
The history of computer networking is complex. It has involved many people from all over the
world over the past 35 years. Presented here is a simplified view of how the Internet evolved. The
processes of invention and commercialization are far more complicated, but it is helpful to look at
the fundamental development.
In the 1940s computers were large electromechanical devices that were prone to failure. In 1947
the invention of a semiconductor transistor opened up many possibilities for making smaller, more
reliable computers. In the 1950s mainframe computers, which were run by punched card
programs, began to be used by large institutions. In the late 1950s the integrated circuit that
combined several, then many, and now millions, of transistors on one small piece of
semiconductor was invented. Through the 1960s mainframes with terminals were commonplace,
and integrated circuits were widely used.
In the late 1960s and 1970s, smaller computers, called minicomputers, came into existence. However, these minicomputers were still very large by modern standards. In 1977 the Apple Computer Company introduced the microcomputer, also known as the personal computer. In 1981 IBM introduced its first personal computer. The user-friendly Mac, the open-architecture IBM PC, and the further micro-miniaturization of integrated circuits led to widespread use of personal computers in homes and businesses.
In the mid-1980s users with stand-alone computers started to share files using modems to
connect to other computers. This was referred to as point-to-point, or dial-up communication. This
concept was expanded by the use of computers that were the central point of communication in a
dial-up connection. These computers were called bulletin boards. Users would connect to the
bulletin boards, leave and pick up messages, as well as upload and download files. The
drawback to this type of system was that there was very little direct communication and then only
with those who knew about the bulletin board. Another limitation was that the bulletin board
computer required one modem per connection. If five people connected it would require five
modems connected to five separate phone lines. As the number of people who wanted to use the
system grew, the system was not able to handle the demand.
For example, imagine if 500 people wanted to connect at the same time. Starting in the 1960s
and continuing through the 70s, 80s, and 90s, the Department of Defense (DoD) developed large,
reliable, wide-area networks (WANs) for military and scientific reasons. This technology was
different from the point-to-point communication used in bulletin boards. It allowed multiple
computers to be connected together using many different paths. The network itself would
determine how to move data from one computer to another. Instead of only being able to
communicate with one other computer at a time, many computers could be reached using the
same connection. The DoD's WAN eventually became the Internet.
NETWORKING DEVICES
Equipment that connects directly to a network segment is referred to as a device. These devices
are broken up into two classifications. The first classification is end-user devices. End-user
devices include computers, printers, scanners, and other devices that provide services directly to
the user. The second classification is network devices. Network devices include all the devices
that connect the end-user devices together to allow them to communicate.
End-user devices that provide users with a connection to the network are also referred to as
hosts. These devices allow users to share, create, and obtain information. The host devices can
exist without a network, but without the network the host capabilities are greatly reduced.
Host devices are physically connected to the network media using a network interface card (NIC).
They use this connection to perform the tasks of sending e-mails, printing reports, scanning
pictures, or accessing databases. A NIC is a printed circuit board that fits into the expansion slot
of a bus on a computer motherboard, or it can be a peripheral device. It is also called a network
adapter. Laptop or notebook computer NICs are usually the size of a PCMCIA card. Each
individual NIC carries a unique code, called a Media Access Control (MAC) address. This
address is used to control data communication for the host on the network. More about the MAC
address will be covered later. As the name implies, the NIC controls host access to the medium.
There are no standardized symbols for end-user devices in the networking industry. They appear
similar to the real devices to allow for quick recognition.
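A MAC address is conventionally written as six two-digit hexadecimal octets, with the first three octets identifying the NIC manufacturer. The following Python sketch (a hypothetical helper, not part of any networking library) validates that format and splits an address into its two halves:

```python
import re

def parse_mac(mac: str) -> tuple:
    """Validate a MAC address written as six hex octets separated by
    ':' or '-', and split it into the vendor (OUI) and device parts."""
    if not re.fullmatch(r"([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}", mac):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    octets = re.split(r"[:-]", mac.upper())
    oui = ":".join(octets[:3])      # assigned to the NIC manufacturer
    device = ":".join(octets[3:])   # unique per card within that vendor
    return oui, device

print(parse_mac("00:1A:2B:3C:4D:5E"))  # → ('00:1A:2B', '3C:4D:5E')
```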
Network devices provide transport for the data that needs to be transferred between end-user
devices. Network devices provide extension of cable connections, concentration of connections,
conversion of data formats, and management of data transfers. Examples of devices that perform
these functions are repeaters, hubs, bridges, switches, and routers. All of the network devices
mentioned here are covered in depth later in the course. For now, a brief overview of networking
devices will be provided.
Hubs concentrate connections. In other words, they take a group of hosts and allow the network
to see them as a single unit. This is done passively, without any other effect on the data
transmission. Active hubs not only concentrate hosts, but they also regenerate signals.
Bridges convert network transmission data formats as well as perform basic data transmission
management. Bridges, as the name implies, provide connections between LANs. Not only do
bridges connect LANs, but they also perform a check on the data to determine whether it should
cross the bridge or not. This makes each part of the network more efficient.
Workgroup switches add more intelligence to data transfer management. Not only can they
determine whether data should remain on a LAN or not, but they can transfer the data only to the
connection that needs that data. Another difference between a bridge and switch is that a switch
does not convert data transmission formats.
Routers have all the capabilities listed above. Routers can regenerate signals, concentrate
multiple connections, convert data transmission formats, and manage data transfers. They can
also connect to a WAN, which allows them to connect LANs that are separated by great
distances. None of the other devices can provide this type of connection.
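The device comparison above can be summarized as a small capability table. The names and the exact capability sets below are illustrative simplifications of the text, not part of any standard or library:

```python
# A summary of the device comparison above as a capability table.
CAPABILITIES = {
    "repeater": {"regenerate"},
    "hub":      {"concentrate"},                      # active hubs also regenerate
    "bridge":   {"concentrate", "convert", "filter"},
    "switch":   {"concentrate", "filter"},            # switches do not convert formats
    "router":   {"regenerate", "concentrate", "convert", "filter", "wan"},
}

def can(device: str, capability: str) -> bool:
    """Return True if the device type offers the given capability."""
    return capability in CAPABILITIES[device]

print(can("router", "wan"))     # → True; only routers connect LANs across a WAN
print(can("switch", "convert")) # → False
```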
TYPES OF NETWORKS
Peer-to-Peer Network
Client-Server Network
Peer-to-Peer Network
In order for data to travel from the source to the destination, each layer of the OSI model at the
source must communicate with its peer layer at the destination. This form of communication is
referred to as peer-to-peer. During this process the protocols of each layer exchange information,
called protocol data units (PDUs). Each layer on the source computer uses a layer-specific PDU to communicate with its peer layer on the destination computer.
Data packets on a network originate at a source and then travel to a destination. Each layer
depends on the service function of the OSI layer below it. To provide this service, the lower layer
uses encapsulation to put the PDU from the upper layer into its data field; then it adds whatever
headers and trailers the layer needs to perform its function. Next, as the data moves down
through the layers of the OSI model, additional headers and trailers are added. After Layers 7, 6,
and 5 have added their information, Layer 4 adds more information. This grouping of data, the
Layer 4 PDU, is called a segment.
The network layer provides a service to the transport layer, and the transport layer presents data
to the internetwork subsystem. The network layer has the task of moving the data through the
internetwork. It accomplishes this task by encapsulating the data and attaching a header creating
a packet (the Layer 3 PDU). The header contains information required to complete the transfer,
such as source and destination logical addresses.
The data link layer provides a service to the network layer. It encapsulates the network layer
information in a frame (the Layer 2 PDU). The frame header contains information (for example,
physical addresses) required to complete the data link functions. The data link layer provides a
service to the network layer by encapsulating the network layer information in a frame.
The physical layer also provides a service to the data link layer. It encodes the data link frame into a pattern of 1s and 0s (bits) for transmission on the medium (usually a wire) at Layer 1.
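The encapsulation sequence described above (segment, packet, frame, bits) can be sketched in Python. The header and trailer contents here are placeholders, not real protocol fields:

```python
def encapsulate(data: bytes) -> str:
    """Sketch of the OSI encapsulation steps described above. Each lower
    layer wraps the upper layer's PDU in its own header (and trailer)."""
    segment = b"L4HDR" + data             # Layer 4 PDU: segment
    packet = b"L3HDR" + segment           # Layer 3 PDU: packet (logical addresses)
    frame = b"L2HDR" + packet + b"L2TRL"  # Layer 2 PDU: frame (physical addresses)
    # Layer 1 encodes the frame as a pattern of 1s and 0s on the medium.
    return "".join(f"{byte:08b}" for byte in frame)

bits = encapsulate(b"HELLO")
print(len(bits))  # → 200 (a 25-byte frame, 8 bits per byte)
```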
LANs consist of the following components:
• Computers
• Network interface cards
• Peripheral devices
• Networking media
• Network devices
LANs make it possible for businesses that use computer technology to locally share files and
printers efficiently, and make internal communications possible. A good example of this
technology is e-mail. They tie data, local communications, and computing equipment together.
Some common LAN technologies include:
• Ethernet
• Token Ring
• FDDI
WANs interconnect LANs, which then provide access to computers or file servers in other
locations. Because WANs connect user networks over a large geographical area, they make it
possible for businesses to communicate across great distances. Using WANs allows computers,
printers and other devices on a LAN to share and be shared with distant locations. WANs provide
instant communications across large geographic areas.
The ability to send an instant message (IM) to someone anywhere in the world provides the same
communication capabilities that used to be only possible if people were in the same physical
office. Collaboration software provides access to real-time information and resources that allows
meetings to be held remotely, instead of in person. Wide-area networking has also created a new
class of workers - telecommuters, people who never have to leave their homes to go to work.
Some common WAN technologies include:
• Modems
• Integrated Services Digital Network (ISDN)
• Digital Subscriber Line (DSL)
• Frame Relay
• US (T) and Europe (E) Carrier Series – T1, E1, T3, E3
A MAN is a network that spans a metropolitan area such as a city or suburban area. A MAN
usually consists of two or more LANs in a common geographic area. For example, a bank with
multiple branches may utilize a MAN. Typically, a service provider is used to connect two or more
LAN sites using private communication lines or optical services.
A MAN can also be created using wireless bridge technology by beaming signals across public areas.
A SAN is a dedicated, high-performance network used to move data between servers and storage resources. Because it is a separate, dedicated network, it avoids any traffic conflict between clients and servers. SAN technology offers the following features:
• Performance – SANs enable concurrent access of disk or tape arrays by two or more
servers at high speeds, providing enhanced system performance.
• Availability – SANs have disaster tolerance built in, because data can be mirrored using
a SAN up to 10 kilometers (km) or 6.2 miles away.
• Scalability – Like a LAN/WAN, a SAN can use a variety of technologies. This allows easy relocation of backup data, operations, file migration, and data replication between systems.
A VPN is a private network that is constructed within a public network infrastructure such as the global Internet. Using a VPN, a telecommuter can access the network of the company headquarters through the Internet by building a secure tunnel between the telecommuter's PC and a VPN router at the headquarters.
• What speeds for data transmission can be achieved using a particular type of cable? The
speed of bit transmission through the cable is extremely important. The speed of
transmission is affected by the kind of conduit used.
• What kind of transmission is being considered? Will the transmissions be digital or will
they be analog-based? Digital or baseband transmission and analog-based or broadband
transmission are the two choices.
• How far can a signal travel through a particular type of cable before attenuation of that
signal becomes a concern? In other words, will the signal become so degraded that the
recipient device might not be able to accurately receive and interpret the signal by the
time the signal reaches that device? The distance the signal travels through the cable
directly affects attenuation of the signal. Degradation of the signal is directly related to the
distance the signal travels and the type of cable used.
Some common Ethernet cable specifications include:
• 10BASE-T
• 10BASE5
• 10BASE2
10BASE-T refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or digitally interpreted. The T stands for twisted pair.
10BASE5 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or digitally interpreted. The 5 represents the capability of the cable to allow the signal to travel for approximately 500 meters before attenuation could disrupt the ability of the receiver to appropriately interpret the signal being received. 10BASE5 is often referred to as Thicknet. Thicknet is actually a type of network, while 10BASE5 is the cabling used in that network.
10BASE2 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband,
or digitally interpreted. The 2, in 10BASE2, represents the capability of the cable to allow the
signal to travel for approximately 200 meters, before attenuation could disrupt the ability of the
receiver to appropriately interpret the signal being received. 10BASE2 is often referred to as
Thinnet. Thinnet is actually a type of network, while 10BASE2 is the cabling used in that network.
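The 10BASE naming convention described above can be decoded mechanically. The following Python sketch is an illustrative helper (not part of any standard library) that uses the approximate segment lengths given in the text:

```python
def parse_ethernet_name(name: str) -> dict:
    """Decode a designation such as 10BASE-T, 10BASE5, or 10BASE2
    into its parts: speed, signaling type, and medium."""
    speed, _, suffix = name.upper().partition("BASE")
    info = {"speed_mbps": int(speed), "signaling": "baseband"}
    suffix = suffix.lstrip("-")
    if suffix.isdigit():
        # the digit gives the approximate segment length in hundreds of meters
        info["medium"] = "coaxial"
        info["max_segment_m"] = int(suffix) * 100
    elif suffix == "T":
        info["medium"] = "twisted pair"
    return info

print(parse_ethernet_name("10BASE5"))
# → {'speed_mbps': 10, 'signaling': 'baseband', 'medium': 'coaxial', 'max_segment_m': 500}
```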
Coaxial Cable
Coaxial cable consists of a hollow outer cylindrical conductor that surrounds a single inner wire. The cable is made of two conducting elements. One of these elements, located in the center of the cable, is a
copper conductor. Surrounding the copper conductor is a layer of flexible insulation. Over this
insulating material is a woven copper braid or metallic foil that acts as the second wire in the
circuit and as a shield for the inner conductor. This second layer, or shield, reduces the amount of outside electromagnetic interference. Covering this shield is the cable jacket.
For LANs, coaxial cable offers several advantages. It can be run longer distances than shielded
twisted pair, STP, and unshielded twisted pair, UTP, cable without the need for repeaters.
Repeaters regenerate the signals in a network so that they can cover greater distances. Coaxial
cable is less expensive than fiber-optic cable, and the technology is well known. It has been used
for many years for many types of data communication, including cable television.
When working with cable, it is important to consider its size. As the thickness of the cable
increases, so does the difficulty in working with it. Remember that cable must be pulled through
existing conduits and troughs that are limited in size. Coaxial cable comes in a variety of sizes.
The largest diameter was specified for use as Ethernet backbone cable, because it has a greater
transmission length and noise rejection characteristics. This type of coaxial cable is frequently
referred to as thicknet. As its nickname suggests, this type of cable can be too rigid to install
easily in some situations. Generally, the more difficult the network media is to install, the more
expensive it is to install. Coaxial cable is more expensive to install than twisted-pair cable.
Thicknet cable is almost never used anymore, except for special purpose installations.
In the past, ‘thinnet’ coaxial cable with an outside diameter of only 0.35 cm was used in Ethernet
networks. It was especially useful for cable installations that required the cable to make many
twists and turns. Since thinnet was easier to install, it was also cheaper to install. This led some
people to refer to it as cheapernet. The outer copper or metallic braid in coaxial cable comprises half of the electrical circuit, so special care must be taken to ensure a solid electrical connection at both ends, resulting in proper grounding. A poor shield connection is one of the biggest sources of
connection problems in the installation of coaxial cable. Connection problems result in electrical
noise that interferes with signal transmittal on the networking media.
For this reason thinnet is no longer commonly used nor supported by latest standards (100 Mbps
and higher) for Ethernet networks.
Shielded twisted-pair cable (STP) combines the techniques of shielding, cancellation, and twisting
of wires. Each pair of wires is wrapped in metallic foil. The four pairs of wires are wrapped in an
overall metallic braid or foil. It is usually 150-Ohm cable. As specified for use in Ethernet network
installations, STP reduces electrical noise within the cable such as pair to pair coupling and
crosstalk. STP also reduces electronic noise from outside the cable, for example electromagnetic
interference (EMI) and radio frequency interference (RFI). Shielded twisted-pair cable shares
many of the advantages and disadvantages of unshielded twisted-pair cable (UTP). STP affords
greater protection from all types of external interference, but is more expensive and difficult to
install than UTP.
A new hybrid of UTP with traditional STP is Screened UTP (ScTP), also known as Foil Twisted
Pair (FTP). ScTP is essentially UTP wrapped in a metallic foil shield. It is usually 100-Ohm or 120-Ohm cable.
The metallic shielding materials in STP and ScTP need to be grounded at both ends. If
improperly grounded or if there are any discontinuities in the entire length of the shielding
material, STP and ScTP become susceptible to major noise problems. They are susceptible
because they allow the shield to act like an antenna picking up unwanted signals. However, this
effect works both ways. Not only does the shield prevent incoming electromagnetic waves from
causing noise on data wires, but it also minimizes the outgoing radiated electromagnetic waves.
These waves could cause noise in other devices. STP and ScTP cable cannot be run as far as
other networking media, such as coaxial cable or optical fiber, without the signal being repeated.
More insulation and shielding combine to considerably increase the size, weight, and cost of the
cable. The shielding materials make terminations difficult and susceptible to poor workmanship.
However, STP and ScTP still have a role, especially in Europe.
Unshielded twisted-pair cable (UTP) is a four-pair wire medium used in a variety of networks.
Each of the 8 individual copper wires in the UTP cable is covered by insulating material. In
addition, each pair of wires is twisted around each other. This type of cable relies solely on the
cancellation effect produced by the twisted wire pairs, to limit signal degradation caused by EMI
and RFI. To further reduce crosstalk between the pairs in UTP cable, the number of twists in the
wire pairs varies. Like STP cable, UTP cable must follow precise specifications as to how many
twists or braids are permitted per foot of cable.
TIA/EIA-568-A contains specifications governing cable performance. It calls for running two
cables, one for voice and one for data, to each outlet. Of the two cables, the one for voice must
be four-pair UTP. CAT 5 is most frequently recommended and implemented in installations today.
Unshielded twisted-pair cable has many advantages. It is easy to install and is less expensive
than other types of networking media. In fact, UTP costs less per meter than any other type of
LAN cabling. However, the real advantage is the size. Since it has such a small external
diameter, UTP does not fill up wiring ducts as rapidly as other types of cable. This can be an
extremely important factor to consider, particularly when installing a network in an older building.
In addition, when UTP cable is installed using an RJ-45 connector, potential sources of network
noise are greatly reduced and a good solid connection is practically guaranteed. There are
disadvantages in using twisted-pair cabling.
UTP cable is more prone to electrical noise and interference than other types of networking media, and the distance between signal boosts is shorter for UTP than it is for coaxial and fiber-optic cables. UTP was once considered slower at transmitting data than other types of cable. This is no longer true. In fact, today, UTP is considered the fastest copper-based media.
When communication occurs, the signal that is transmitted by the source needs to be understood
by the destination. This is true from both a software and physical perspective. The transmitted
signal needs to be properly received by the circuit connection designed to receive signals. The
transmit pin of the source needs to ultimately connect to the receiving pin of the destination. The
following are the types of cable connections used between internetwork devices.
A LAN switch is connected to a computer. The cable that connects from the switch port to the
computer NIC port is called a straight-through cable.
The cable that connects from one switch port to another switch port is called a crossover cable. In
the figure, the cable that connects the RJ-45 adapter on the com port of the computer to the console
port of the router or switch is called a rollover cable.
The cables are defined by the type of connections, or pinouts, from one end to the other end of
the cable. See images two, four, and six. A technician can compare both ends of the same cable
by placing them next to each other, provided the cable has not yet been placed in a wall. The
technician observes the colors of the two RJ-45 connections by placing both ends with the clip
placed into the hand and the top of both ends of the cable pointing away from the technician. A
straight through cable should have both ends with identical color patterns. While comparing the
ends of a cross-over cable, the color of pins #1 and #2 will appear on the other end at pins #3
and #6, and vice-versa. This occurs because the transmit and receive pins are in different
locations. On a rollover cable, the color combination from left to right on one end should be
exactly opposite to the color combination on the other end.
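The pinouts described above can be expressed as simple lookup tables. This Python sketch follows the text's description: a straight-through cable maps each pin to itself, a crossover swaps pins 1 and 2 with pins 3 and 6, and a rollover reverses the pin order:

```python
# Pin mappings for the three 8-pin RJ-45 cable types described above.
STRAIGHT_THROUGH = {pin: pin for pin in range(1, 9)}  # each pin to itself

CROSSOVER = dict(STRAIGHT_THROUGH)
CROSSOVER.update({1: 3, 2: 6, 3: 1, 6: 2})            # transmit/receive pairs swapped

ROLLOVER = {pin: 9 - pin for pin in range(1, 9)}      # pin order reversed

print(CROSSOVER[1], CROSSOVER[2])  # → 3 6
print(ROLLOVER[1], ROLLOVER[8])    # → 8 1
```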
The light used in optical fiber networks is one type of electromagnetic energy. When an electric
charge moves back and forth, or accelerates, a type of energy called electromagnetic energy is
produced. This energy in the form of waves can travel through a vacuum, the air, and through
some materials like glass. An important property of any energy wave is the wavelength.
Radio, microwaves, radar, visible light, x-rays, and gamma rays seem to be very different things.
However, they are all types of electromagnetic energy. If all the types of electromagnetic waves
are arranged in order from the longest wavelength down to the shortest wavelength, a continuum
called the electromagnetic spectrum is created.
The wavelength of an electromagnetic wave is determined by how frequently the electric charge
that generates the wave moves back and forth. If the charge moves back and forth slowly, the
wavelength it generates is a long wavelength. Visualize the movement of the electric charge as
like that of a stick in a pool of water. If the stick is moved back and forth slowly, it will generate
ripples in the water with a long wavelength between the tops of the ripples. If the stick is moved
back and forth more rapidly, the ripples will have a shorter wavelength. Because electromagnetic
waves are all generated in the same way, they share many of the same properties. They all travel at a rate of 300,000 kilometers/s (186,283 miles/s) through a vacuum.
Human eyes were designed to only sense electromagnetic energy with wavelengths between 700
nanometers and 400 nanometers (nm). A nanometer is one billionth of a meter (0.000000001
meter) in length. Electromagnetic energy with wavelengths between 700 and 400 nm is called
visible light. The longer wavelengths of light that are around 700 nm are seen as the color red.
The shortest wavelengths that are around 400 nm appear as the color violet. This part of the
electromagnetic spectrum is seen as the colors in a rainbow.
Wavelengths that are not visible to the human eye are used to transmit data over optical fiber.
These wavelengths are slightly longer than red light and are called infrared light. Infrared
light is used in TV remote controls. The wavelength of the light in optical fiber is either
850 nm, 1310 nm, or 1550 nm. These wavelengths were selected as they travel through
optical fiber better than other wavelengths.
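Wavelength and frequency are related by f = c / λ. Using the rounded vacuum speed of light given earlier (300,000 km/s), the frequencies of the three fiber wavelengths can be computed:

```python
C_KM_PER_S = 300_000  # rounded vacuum speed of light, as given earlier

def frequency_thz(wavelength_nm: float) -> float:
    """Frequency f = c / wavelength, returned in terahertz."""
    c_m_per_s = C_KM_PER_S * 1000        # km/s -> m/s
    wavelength_m = wavelength_nm * 1e-9  # nm -> m
    return c_m_per_s / wavelength_m / 1e12

for nm in (850, 1310, 1550):
    print(f"{nm} nm -> {frequency_thz(nm):.0f} THz")
# → 850 nm -> 353 THz, 1310 nm -> 229 THz, 1550 nm -> 194 THz
```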
In order to use a computer as a reliable means of obtaining information, such as accessing Web-based curriculum, it must be in good working order. Keeping a PC in working order requires occasional troubleshooting of simple problems with computer hardware and software. It is therefore necessary to be able to recognize the names and purposes of the major PC components, which are listed later in this section.
INTERNET
The Internet is a valuable resource and connection to it is essential for business, industry and
education. Building a network that will connect to the Internet requires careful planning. Even for
the individual user some planning and decisions are necessary. The computer itself must be
considered, as well as the device that makes the connection to the local-area network
(LAN), such as the network interface card or modem. The correct protocol must be configured so
that the computer can connect to the Internet. Proper selection of a web browser is also important.
One common configuration of a LAN is an Intranet. Intranet Web servers differ from public Web
servers in that users must have the proper permissions and passwords to access the Intranet
of an organization. Intranets are designed to permit access by users who have access privileges
to the internal LAN of the organization. Within an Intranet, Web servers are installed in the
network. Browser technology is used as the common front end to access information such as
financial data or graphical, text-based data stored on those servers.
Extranets refer to applications and services that are Intranet based, and use extended, secure
access to external users or enterprises. This access is usually accomplished through passwords,
user IDs, and other application-level security. Therefore, an Extranet is the extension of two or
more Intranet strategies with a secure interaction between participant enterprises and their
respective intranets.
The Internet is the largest data network on earth. The Internet consists of a multitude of
interconnected networks both large and small. At the edge of this giant network is the individual
consumer computer. Connection to the Internet can be broken down into the physical connection,
the logical connection, and the application.
The logical connection uses standards called protocols. A protocol is a formal description of a set
of rules and conventions that govern how devices on a network communicate. Connections to the
Internet may use multiple protocols. The Transmission Control Protocol/Internet Protocol
(TCP/IP) suite is the primary protocol used on the Internet. TCP/IP is a suite of protocols that
work together to transmit data.
The application that interprets the data and displays the information in an understandable form is
the last part of the connection. Applications work with protocols to send and receive data across
the Internet. A web browser displays Hypertext Markup Language (HTML) as a web page. File
Transfer Protocol (FTP) is used to download files and programs from the Internet. Web browsers
also use proprietary plug-in applications to display special data types such as movies or flash
animations.
This is an introductory view of the Internet, and it may seem an overly simple process. As the
topic is explored in greater depth, it will become apparent that sending data across the Internet is
a complicated task.
• Printed circuit board (PCB) – A thin plate on which chips or integrated circuits and other electronic components are placed.
• CD-ROM drive – Compact disk read-only memory drive, a device that can read information from a CD-ROM.
• Central processing unit (CPU) – The brains of the computer, where calculations take place.
• Floppy disk drive – A disk drive that can read and write to floppy disks.
• Hard disk drive – The device that reads and writes data on a hard disk.
• Microprocessor – A silicon chip that contains a CPU.
• Motherboard – The main circuit board of a microcomputer.
• Bus – A collection of wires by which data is transmitted from one part of a computer to another.
• Random-access memory (RAM) – Also known as read-write memory; new data can be written to it and stored data can be read from it. RAM requires electrical power to maintain data storage. If the computer is turned off or loses power, all data stored in RAM is lost.
• Read-only memory (ROM) – Computer memory on which data has been prerecorded. Once data has been written onto a ROM, it cannot be removed and can only be read.
• System unit – The main part of a PC, which includes the chassis, microprocessor, main memory, bus, and ports. The system unit does not include the keyboard, monitor, or any external devices connected to the computer.
• Expansion slot – A socket on the motherboard where a circuit board can be inserted to add new capabilities to the computer.
• Power supply – The component that supplies power to a computer.
Backplane Components
• Backplane – The large circuit board that contains sockets for expansion cards.
• Network interface card (NIC) – An expansion board inserted into a computer so that the computer can be connected to a network.
• Video card – A board that plugs into a PC to give it display capabilities.
• Audio card – An expansion board that enables a computer to manipulate and output sounds.
• Parallel port – An interface capable of transferring more than one bit simultaneously that is used to connect external devices such as printers.
• Serial port – An interface that can be used for serial communication, in which only one bit is transmitted at a time.
• Mouse port – A port designed for connecting a mouse to a PC.
• Power cord – A cord used to connect an electrical device to an electrical outlet that provides power to the device.
Think of the internal components of a PC as a network of devices, which are all attached to the
system bus. In a sense, a PC is a small computer network.
IP ADDRESS
The 32-bit binary addresses used on the Internet are referred to as Internet Protocol (IP) addresses. The relationship between IP addresses and network masks will be addressed in this section.
When IP addresses are assigned to computers, some of the bits on the left side of the 32-bit IP
number represent a network. The number of bits designated depends on the address class. The
bits left over in the 32-bit IP address identify a particular computer on the network. This computer is referred to as the host. The IP address of a computer consists of a network part and a host part, where the host part represents a particular computer on a particular network.
To inform a computer how the 32-bit IP address has been split, a second 32-bit number called a
subnetwork mask is used. This mask is a guide that indicates how the IP address should be
interpreted by identifying how many of the bits are used to identify the network of the computer.
The subnetwork mask sequentially fills in the 1s from the left side of the mask. A subnet mask will
always be all 1s until the network address is identified and then be all 0s from there to the right
most bit of the mask. The bits in the subnet mask that are 0 identify the computer or host on that
network. Some examples of subnet masks are 255.0.0.0 (11111111.00000000.00000000.00000000 in binary) and 255.255.0.0 (11111111.11111111.00000000.00000000 in binary).
In the first example, the first eight bits from the left represent the network portion of the address,
and the last 24 bits represent the host portion of the address. In the second example the first 16
bits represent the network portion of the address, and the last 16 bits represent the host portion of
the address.
00001010.00100010.00010111.10000110
Performing a Boolean AND of the IP address 10.34.23.134 and the subnet mask 255.0.0.0
produces the network address of this host:
00001010.00100010.00010111.10000110
11111111.00000000.00000000.00000000
00001010.00000000.00000000.00000000
Converting the result to dotted decimal, 10.0.0.0 is the network portion of the IP address, when
using the 255.0.0.0 mask.
Performing a Boolean AND of the IP address 10.34.23.134 and subnet mask 255.255.0.0
produces the network address of this host:
00001010.00100010.00010111.10000110
11111111.11111111.00000000.00000000
00001010.00100010.00000000.00000000
Converting the result to dotted decimal, 10.34.0.0 is the network portion of the IP address, when
using the 255.255.0.0 mask.
This is a brief illustration of the effect that a network mask has on an IP address. The importance
of masking will become much clearer as more work with IP addresses is done. For right now it is
only important that the concept of the mask is understood.
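The masking walk-through above can be sketched in a few lines of Python. The function name is our own; the octet-by-octet bitwise AND is exactly the operation the two examples perform.

```python
def network_address(ip, mask):
    """Bitwise-AND each octet of an IP address with the matching
    octet of the subnet mask to recover the network address."""
    ip_octets = [int(octet) for octet in ip.split(".")]
    mask_octets = [int(octet) for octet in mask.split(".")]
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

# The two examples above:
print(network_address("10.34.23.134", "255.0.0.0"))     # 10.0.0.0
print(network_address("10.34.23.134", "255.255.0.0"))   # 10.34.0.0
```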
Data networks developed as a result of business applications that were written for microcomputers.
At that time microcomputers were not connected as mainframe computer terminals were, so there
was no efficient way of sharing data among multiple microcomputers. It became apparent that
sharing data through the use of floppy disks, a practice sometimes called a sneakernet, was not an
efficient or cost-effective way to operate a business. A sneakernet created multiple copies of the data.
Each time a file was modified, it would have to be shared again with all the other people who needed
that file. If two people modified the file and then tried to share it, one set of changes would be lost.
Businesses needed a solution that would eliminate duplicate copies, keep shared files consistent, and
let many users work with the same data.
Businesses realized that networking technology could increase productivity while saving money.
Networks were added and expanded almost as rapidly as new network technologies and
products were introduced. In the early 1980s networking saw a tremendous expansion, even
though the early development of networking was disorganized.
In the mid-1980s, the network technologies that had emerged had been created with a variety of
different hardware and software implementations. Each company that created network hardware
and software used its own company standards. These individual standards were developed
because of competition with other companies. Consequently, many of the new network
technologies were incompatible with each other. It became increasingly difficult for networks that
used different specifications to communicate with each other. Upgrading often required the old
network equipment to be removed before new equipment could be implemented.
One early solution was the creation of local-area network (LAN) standards. Because LAN
standards provided an open set of guidelines for creating network hardware and software, the
equipment from different companies could then become compatible. This allowed for stability in
LAN implementation. Even so, in a LAN system each department of the company remained a kind of
electronic island. As the use of computers in businesses grew, it soon became obvious that even
LANs were not sufficient.
What was needed was a way for information to move efficiently and quickly, not only within a
company, but also from one business to another. The solution was the creation of metropolitan-
area networks (MANs) and wide-area networks (WANs). Because WANs could connect user
networks over large geographic areas, it was possible for businesses to communicate with each
other across great distances. Figure summarizes the relative sizes of LANs and WANs.
The U.S. Department of Defense (DoD) created the TCP/IP reference model because it wanted a
network that could survive any conditions. To illustrate further, imagine a world, crossed by
multiple cable runs, wires, microwaves, optical fibers, and satellite links.
Then imagine a need for data to be transmitted without regard for the condition of any particular
node or network. The DoD required reliable data transmission to any destination on the network
under any circumstance. The creation of the TCP/IP model helped to solve this difficult design
problem. The TCP/IP model has since become the standard on which the Internet is based.
In reading about the layers of the TCP/IP model, keep in mind the original intent of the
Internet. Remembering the intent will help reduce confusion. The TCP/IP model has four layers:
the application layer, transport layer, Internet layer, and the network access layer. Some of the
layers in the TCP/IP model have the same name as layers in the OSI model. It is critical not to
confuse the layer functions of the two models because the layers include different functions in
each model.
The present version of TCP/IP was standardized in September of 1981. As shown in Figure, IPv4
addresses are 32 bits long, written in dotted decimal, and separated by periods. IPv6 addresses
are 128 bits long, written in hexadecimal, and separated by colons. Colons separate 16-bit fields.
Leading zeros can be omitted in each field as can be seen in the Figure where the field :0003: is
written. In 1992 the Internet Engineering Task Force (IETF) began work to standardize a new
generation of IP, often called IPng, which is now known as IPv6. IPv6 has not yet gained
wide implementation, but it has been released by most vendors of networking equipment and will
eventually become the dominant standard.
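As an illustration, Python's standard ipaddress module handles both address formats; the address values below are arbitrary examples, not taken from the Figure.

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.2")   # 32-bit IPv4, dotted decimal
v6 = ipaddress.ip_address("fe80::3")       # 128-bit IPv6, leading zeros omitted

print(v4.version, v6.version)              # 4 6
# The exploded form restores the omitted leading zeros in each 16-bit field:
print(v6.exploded)   # fe80:0000:0000:0000:0000:0000:0000:0003
```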
IP Addressing
For any two systems to communicate, they must be able to identify and locate each other. While
the addresses in the Figure are not actual network addresses, they illustrate the concept of
address grouping: the letter (A or B) identifies the network, and the number sequence identifies
the individual host.
A computer may be connected to more than one network. In this situation, the system must be
given more than one address. Each address identifies the connection of the computer to a
different network. Strictly speaking, a device is not said to have an address; rather, each of its
connection points, or interfaces, has an address on a network. This allows other computers to
locate the device on that particular network. The combination of the letter (network address) and
the number (host address) creates a unique address for each device on the network. Each computer
in a TCP/IP network must be given a unique identifier, or IP address. This address, operating at
Layer 3, allows one computer to locate another computer on a network. All computers also have
a unique physical address, known as a MAC address. These are assigned by the manufacturer of
the network interface card. MAC addresses operate at Layer 2 of the OSI model.
An IP address is a 32-bit sequence of 1s and 0s. Figure shows a sample 32-bit number. To make
the IP address easier to use, the address is usually written as four decimal numbers separated by
periods. For example, an IP address of one computer is 192.168.1.2. Another computer might
have the address 128.10.2.1. This way of writing the address is called the dotted decimal format.
In this notation, each IP address is written as four parts separated by periods, or dots. Each part
of the address is called an octet because it is made up of eight binary digits. For example, the IP
address 192.168.1.8 would be 11000000.10101000.00000001.00001000 in binary notation. The
dotted decimal notation is an easier method to understand than the binary ones and zeros
method. This dotted decimal notation also prevents a large number of transposition errors that
would result if only the binary numbers were used.
Using dotted decimal allows number patterns to be more easily understood. Both the binary and
decimal numbers in the Figure represent the same values, but the values are easier to see in
dotted decimal notation. This illustrates one of the common problems found in working directly
with binary numbers: the long strings of repeated ones and zeros make transposition and omission
errors more likely.
It is easy to see the relationship between the numbers 192.168.1.8 and 192.168.1.9, where
11000000.10101000.00000001.00001000 and 11000000.10101000.00000001.00001001 are not
as easy to recognize. Looking at the binary, it is almost impossible to see that they are
consecutive numbers.
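A short Python sketch makes the comparison concrete; the helper name is invented for illustration.

```python
def to_binary(dotted):
    """Render a dotted decimal IP address as four 8-bit binary octets."""
    return ".".join(format(int(octet), "08b") for octet in dotted.split("."))

print(to_binary("192.168.1.8"))   # 11000000.10101000.00000001.00001000
print(to_binary("192.168.1.9"))   # 11000000.10101000.00000001.00001001
```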
Subnetting
Subnetting is another method of managing IP addresses. This method of dividing full network
address classes into smaller pieces has prevented complete IP address exhaustion. It is
impossible to cover TCP/IP without mentioning subnetting. As a system administrator it is
important to understand subnetting as a means of dividing and identifying separate networks
throughout the LAN.
It is not always necessary to subnet a small network. However, for large or extremely large
networks, subnetting is required. To subnet a network is to use the subnet mask to break a large
network up into smaller, more efficient and manageable segments, or subnets. An analogy is the
U.S. telephone system, which is broken into area codes, exchange codes, and local numbers.
The system administrator must answer two questions when adding and expanding the network: how
many subnets, or networks, are needed, and how many hosts will be needed on each network. With
subnetting, the network is not limited to the default Class A, B, or C network masks, and there is
more flexibility in the network design.
Subnet addresses include the network portion, plus a subnet field and a host field. The subnet
field and the host field are created from the original host portion for the entire network. The ability
to decide how to divide the original host portion into the new subnet and host fields provides
addressing flexibility for the network administrator.
To create a subnet address, a network administrator borrows bits from the host field and
designates them as the subnet field. The minimum number of bits that can be borrowed is two. If
only one bit were borrowed, the subnet field could hold only the all-zeros network number and the
all-ones broadcast number, leaving no usable subnets. The maximum number of bits that can be
borrowed is any number that leaves at least two bits remaining for the host number.
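Under the classic rules described above, in which the all-zeros and all-ones values of each field are reserved, the usable subnet and host counts can be sketched as follows (the function name is our own; many modern networks relax the reserved-subnet rule):

```python
def subnet_counts(borrowed_bits, host_bits):
    """Usable subnets and hosts under the classic rule that the
    all-zeros and all-ones values of each field are reserved."""
    return 2 ** borrowed_bits - 2, 2 ** host_bits - 2

# Borrowing 3 bits from a Class C host octet leaves 5 host bits:
print(subnet_counts(3, 5))   # (6, 30)
# Borrowing only 1 bit would leave 2**1 - 2 = 0 usable subnets,
# which is why at least two bits must be borrowed.
```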
Network Protocols
Protocol suites are collections of protocols that enable network communication from one host
through the network to another host. A protocol is a formal description of a set of rules and
conventions that govern a particular aspect of how devices on a network communicate. Protocols
determine the format, timing, sequencing, and error control in data communication. Without
protocols, the computer cannot make or rebuild the stream of incoming bits from another
computer into the original format.
Protocols control all aspects of data communication. These network rules are created and
maintained by many different organizations and committees.
Included in these groups are the Institute of Electrical and Electronic Engineers (IEEE), American
National Standards Institute (ANSI), Telecommunications Industry Association (TIA), Electronic
Industries Alliance (EIA) and the International Telecommunications Union (ITU), formerly known
as the Comité Consultatif International Téléphonique et Télégraphique (CCITT).
OSI Model
The early development of networks was disorganized in many ways. The early 1980s saw
tremendous increases in number and size of networks. As companies realized the advantages of
using networking technology, networks were added or expanded almost as rapidly as new
network technologies were introduced.
By the mid-1980s, these companies began to experience problems from the rapid expansion.
Just as people who do not speak the same language have difficulty communicating with each
other, it was difficult for networks that used different specifications and implementations to
exchange information. The same problem occurred with the companies that developed private or
proprietary networking technologies.
Proprietary means that one or a small group of companies controls all usage of the technology.
Networking technologies strictly following proprietary rules could not communicate with
technologies that followed different proprietary rules.
The Open System Interconnection (OSI) reference model released in 1984 was the descriptive
network model that the ISO created. It provided vendors with a set of standards that ensured
greater compatibility and interoperability among various network technologies produced by
companies around the world.
OSI Layers
The OSI reference model has become the primary model for network communications. Although
there are other models in existence, most network vendors relate their products to the OSI
reference model. This is especially true when they want to educate users on the use of their
products. It is considered the best tool available for teaching people about sending and receiving
data on a network.
The OSI reference model is a framework that is used to understand how information travels
throughout a network. The OSI reference model explains how packets travel through the various
layers to another device on a network, even if the sender and destination have different types of
network media.
In the OSI reference model, there are seven numbered layers, each of which illustrates a
particular network function. Dividing the network into seven layers reduces complexity,
standardizes interfaces, and allows changes in one layer to be made without affecting the others.
TCP/IP Model
The historical and technical standard of the Internet is the TCP/IP model. The U.S. Department of
Defense (DoD) created the TCP/IP reference model, because it wanted to design a network that
could survive any conditions, including a nuclear war. In a world connected by different types of
communication media such as copper wires, microwaves, optical fibers and satellite links, the
DoD wanted transmission of packets every time and under any conditions. This very difficult
design problem brought about the creation of the TCP/IP model.
Unlike the proprietary networking technologies mentioned earlier, TCP/IP was developed as an
open standard. This meant that anyone was free to use TCP/IP. This helped speed up the
development of TCP/IP as a standard. The TCP/IP model has the following four layers:
• Application layer
• Transport layer
• Internet layer
• Network access layer
Although some of the layers in the TCP/IP model have the same name as layers in the OSI
model, the layers of the two models do not correspond exactly. Most notably, the application layer
has different functions in each model. The designers of TCP/IP felt that the application layer
should include OSI session and presentation layer details. They created an application layer that
handles issues of representation, encoding, and dialog control.
The transport layer deals with the quality of service issues of reliability, flow control, and error
correction. One of its protocols, the transmission control protocol (TCP), provides excellent and
flexible ways to create reliable, well-flowing, low-error network communications.
The purpose of the Internet layer is to encapsulate transport-layer segments into packets and send
them from any network on the internetwork. The packets arrive at the destination network
independent of the path they took to get there. The specific protocol that governs this layer is
called the Internet Protocol (IP). Best path determination and packet switching occur at this layer.
The relationship between IP and TCP is an important one. IP can be thought of as pointing the way
for the packets, while TCP provides reliable transport.
The name of the network access layer is very broad and somewhat confusing. It is also known as
the host-to-network layer. This layer is concerned with all of the components, both physical and
logical, that are required to make a physical link. It includes the networking technology details,
including all the details in the OSI physical and data link layers.
Some of the most commonly used application layer protocols include FTP, TFTP, NFS, SMTP, Telnet,
SNMP, and DNS, which are described later in this section.
The primary protocol of the Internet layer is the Internet Protocol (IP).
The network access layer refers to any particular technology used on a specific network.
Regardless of which network application services are provided and which transport protocol is
used, there is only one Internet protocol, IP. This is a deliberate design decision. IP serves as a
universal protocol that allows any computer anywhere to communicate at any time.
A comparison of the OSI model and the TCP/IP model points out some similarities and
differences. Similarities include:
• Both have layers.
• Both have application layers, though they include very different services.
• Both have comparable transport and network (Internet) layers.
• Both assume packets may take different paths to reach the same destination.
Differences include:
• TCP/IP combines the presentation and session layer issues into its application layer.
• TCP/IP combines the OSI data link and physical layers into the network access layer.
• TCP/IP appears simpler because it has fewer layers.
• TCP/IP protocols are the standards around which the Internet developed, so the TCP/IP
model gains credibility just because of its protocols. In contrast, networks are not usually
built on the OSI protocol, even though the OSI model is used as a guide.
Although TCP/IP protocols are the standards with which the Internet has grown, this curriculum
will use the OSI model because it is a generic, protocol-independent standard and because its
greater detail makes it more useful for teaching, learning, and troubleshooting.
Networking professionals differ in their opinions on which model to use. Due to the nature of the
industry it is necessary to become familiar with both. Both the OSI and TCP/IP models will be
referred to throughout the curriculum.
Remember that there is a difference between a model and an actual protocol that is used in
networking. The OSI model will be used to describe TCP/IP protocols.
Application Layer
The application layer of the TCP/IP model handles high-level protocols, issues of representation,
encoding, and dialog control. The TCP/IP protocol suite combines all application related issues
into one layer and assures this data is properly packaged before passing it on to the next layer.
TCP/IP includes not only Internet and transport layer specifications, such as IP and TCP, but also
specifications for common applications. TCP/IP has protocols to support file transfer, e-mail, and
remote login, in addition to the following applications:
• File Transfer Protocol (FTP) – FTP is a reliable, connection-oriented service that uses TCP
to transfer files between systems that support FTP. It supports bi-directional binary file
and ASCII file transfers.
• Trivial File Transfer Protocol (TFTP) – TFTP is a connectionless service that uses the
User Datagram Protocol (UDP). TFTP is used on the router to transfer configuration files
and Cisco IOS images, and to transfer files between systems that support TFTP. It is
useful in some LANs because it operates faster than FTP in a stable environment.
• Network File System (NFS) – NFS is a distributed file system protocol suite developed
by Sun Microsystems that allows file access to a remote storage device, such as a hard
disk, across a network.
• Simple Mail Transfer Protocol (SMTP) – SMTP administers the transmission of e-mail
over computer networks. It does not provide support for transmission of data other than
plaintext.
• Terminal emulation (Telnet) – Telnet provides the capability to remotely access another
computer. It enables a user to log in to an Internet host and execute commands. A Telnet
client is referred to as a local host. A Telnet server is referred to as a remote host.
• Simple Network Management Protocol (SNMP) – SNMP is a protocol that provides a
way to monitor and control network devices, and to manage configurations, statistics
collection, performance, and security.
• Domain Name System (DNS) – DNS is a system used on the Internet for translating
names of domains and their publicly advertised network nodes into IP addresses.
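These application layer protocols run over well-known transport-layer ports. The small Python sketch below uses standard IANA port assignments for the protocols described above:

```python
# Well-known (IANA-assigned) ports for the application layer
# protocols described above.
WELL_KNOWN_PORTS = {
    "FTP": (21, "TCP"),      # control connection
    "Telnet": (23, "TCP"),
    "SMTP": (25, "TCP"),
    "DNS": (53, "UDP/TCP"),
    "TFTP": (69, "UDP"),
    "SNMP": (161, "UDP"),
}

for protocol, (port, transport) in WELL_KNOWN_PORTS.items():
    print(f"{protocol:6} port {port:3} over {transport}")
```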
Transport Layer
The transport layer provides transport services from the source host to the destination host. The
transport layer constitutes a logical connection between the endpoints of the network, the sending
host and the receiving host. Transport protocols segment and reassemble upper-layer application
data into the same data stream between endpoints. The transport layer data stream provides
end-to-end transport services.
The Internet is often represented by a cloud. The transport layer sends data packets from the
sending source to the receiving destination through the cloud. End-to-end control, provided by
sliding windows and reliability in sequencing numbers and acknowledgments, is the primary duty
of the transport layer when using TCP. The transport layer also defines end-to-end connectivity
between host applications. Transport services include segmentation of upper-layer application
data, establishment of end-to-end operations, and, with TCP only, flow control provided by sliding
windows and reliability provided by sequence numbers and acknowledgments.
The cloud between the sending source and the receiving destination deals with issues such as
“Which of several paths is best for a given route?”
Internet Layer
The purpose of the Internet layer is to select the best path through the network for packets to
travel. The main protocol that functions at this layer is the Internet Protocol (IP). Best path
determination and packet switching occur at this layer.
Network Access Layer
The network access layer is also called the host-to-network layer. The network access layer is the
layer that is concerned with all of the issues that an IP packet requires to actually make a physical
link to the network media. It includes the LAN and WAN technology details, and all the details
contained in the OSI physical and data-link layers.
Drivers for software applications, modem cards and other devices operate at the network access
layer. The network access layer defines the procedures for interfacing with the network hardware
and accessing the transmission medium. Modem protocol standards such as Serial Line Internet
Protocol (SLIP) and Point-to-Point Protocol (PPP) provide network access through a modem
connection. Because of an intricate interplay of hardware, software, and transmission-medium
specifications, there are many protocols operating at this layer. This can lead to confusion for
users. Most of the recognizable protocols operate at the transport and Internet layers of the
TCP/IP model.
Network access layer functions include mapping IP addresses to physical hardware addresses
and encapsulation of IP packets into frames. Based upon the hardware type and the network
interface, the network access layer will define the connection with the physical network media.
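The encapsulation of IP packets into frames can be sketched with a minimal Ethernet II header; the MAC addresses below are invented examples, and the trailing frame check sequence is omitted:

```python
import struct

def ethernet_frame(dst_mac, src_mac, payload):
    """Prepend a minimal Ethernet II header to an IP packet:
    6-byte destination MAC, 6-byte source MAC, and the 2-byte
    EtherType 0x0800 that marks an IPv4 payload."""
    ethertype = struct.pack("!H", 0x0800)
    return dst_mac + src_mac + ethertype + payload

dst = bytes.fromhex("ffffffffffff")   # broadcast MAC address
src = bytes.fromhex("00a0c9140123")   # example NIC hardware address
frame = ethernet_frame(dst, src, b"IP packet goes here")
print(frame[12:14].hex())             # 0800, the IPv4 EtherType
```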
A good example of network access layer configuration is setting up a Windows system to use a
third-party NIC. Depending on the version of Windows, the NIC would be detected automatically by
the operating system and the proper drivers would be installed. With an older version of Windows,
the user would have to specify the network card driver. The card manufacturer supplies these
drivers on disks or CD-ROMs.
Internet Architecture
While the Internet is complex, there are some basic ideas in its operation. In this section the basic
architecture of the Internet will be examined. The Internet is a deceptively simple idea that,
when repeated on a large scale, enables nearly instantaneous worldwide data communications
between anyone, anywhere, at any time.
LANs are smaller networks limited in geographic area. Many LANs connected together allow the
Internet to function. But LANs have limitations in scale. Although there have been technological
advances to improve the speed of communications, such as Metro Optical, Gigabit, and 10-
Gigabit Ethernet, distance is still a problem.
Focusing on the communication between the source and destination computer and intermediate
computers at the application layer is one way to get an overview of the Internet architecture.
Placing identical instances of an application on all the computers in the network could ease the
delivery of messages across the large network. However, this does not scale well. For new
software to function properly, it would require new applications installed on every computer in the
network. For new hardware to function properly, it would require modifying the software. Any
failure of an intermediate computer or the application of the computer would cause a break in the
chain of the messages that are passed.
The Internet uses the principle of network layer interconnection. Using the OSI model as an
example, the goal is to build the functionality of the network in independent modules. This allows
a diversity of LAN technologies at Layers 1 and 2 and a diversity of applications functioning at
Layers 5, 6, and 7. The OSI model provides a mechanism where the details of the lower and the
upper layers are separated. This allows intermediate networking devices to “relay” traffic without
having to bother with the details of the LAN.
Internetworks must be able to adjust to dynamic conditions on the network, and they must be
cost-effective. They must be designed to permit anytime, anywhere data communications to
anyone.
Figure summarizes the connection of one physical network to another through a special purpose
computer called a router. These networks are described as directly connected to the router. The
router is needed to handle any path decisions required for the two networks to communicate.
Many routers are needed to handle large volumes of network traffic.
Figure extends the idea to three physical networks connected by two routers. Routers make
complex decisions to allow all the users on all the networks to communicate with each other. Not
all networks are directly connected to one another. The router must have some method to handle
this situation.
One option is for a router to keep a list of all computers and all the paths to them. The router
would then decide how to forward data packets based on this reference table. The forwarding is
based on the IP address of the destination computer. This option would become difficult as the
number of users grows. Scalability is introduced when the router keeps a list of all networks, but
leaves the local delivery details to the local physical networks. In this situation, the routers pass
messages to other routers. Each router shares information about which networks it is connected
to. This builds the routing table.
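The idea of a table of networks, with local delivery left to each physical network, can be sketched as a longest-prefix lookup. The network prefixes and next-hop names here are invented for illustration:

```python
import ipaddress

# Each entry pairs a known network with the next hop toward it.
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "router-A"),
    (ipaddress.ip_network("172.16.0.0/16"), "router-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def next_hop(destination_ip):
    """Forward on the destination IP: pick the most specific
    (longest prefix) network that contains the address."""
    dst = ipaddress.ip_address(destination_ip)
    matches = [(net.prefixlen, hop) for net, hop in ROUTING_TABLE if dst in net]
    return max(matches)[1]

print(next_hop("10.34.23.134"))   # router-A
print(next_hop("8.8.8.8"))        # default-gateway
```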
Figure shows the transparency that users require. Yet, the physical and logical structures inside
the Internet cloud can be extremely complex as displayed in Figure. The Internet has grown
rapidly to allow more and more users. The fact that the Internet has grown so large with more
than 90,000 core routes and 300,000,000 end users is proof of the soundness of the Internet
architecture.
Two computers, anywhere in the world, following certain hardware, software, and protocol
specifications, can communicate reliably. Standardization of practices and procedures for moving
data across networks has made the Internet possible.