4 IP Services
TOPICS COVERED:
IP SERVICES
KEY POINTS:
• NTP stands for Network Time Protocol.
• NTP allows network devices to synchronize their clocks with a central source clock.
• NTP makes sure logging information and timestamps carry an accurate time and date.
• Network Time Protocol (NTP) runs over User Datagram Protocol (UDP) port 123.
• Network Time Protocol (NTP) uses a hierarchical system of time sources.
• Network Time Protocol (NTP) uses a client-server architecture.
• The two versions in common use are NTP version 3 and NTP version 4.
• A Network Time Protocol (NTP) server is also referred to as an NTP master.
• A router can be configured in three modes: server, client, and server/client mode.
• By default, a router works in Network Time Protocol (NTP) server/client mode.
• The stratum defines the reliability and accuracy of a Network Time Protocol source.
• Network Time Protocol (NTP) uses stratum 0 to stratum 15 for NTP sources.
• Stratum 1 is the most reliable and stratum 15 is the least reliable NTP source.
• Stratum 0 represents an atomic clock and is not used on Cisco routers or switches.
• Stratum 1 to 15 are valid levels and are used on Cisco routers and switches.
• Stratum 16 means the device is not synchronized to any Network Time Protocol source.
• The default stratum level of a Cisco router's or switch's internal clock is 8.
• Syslog messages are timestamped using the Network Time Protocol (NTP).
NTP Stratum:
• In NTP, stratum levels define the distance from the reference clock.
• A stratum-0 device is assumed to be the most accurate and has no delay.
• Network Time Protocol stratum-0 servers cannot be used directly on the network.
• For example, a device at NTP stratum 1 is a very accurate device; it might have an atomic clock attached to it.
• Another device that uses that stratum-1 server to synchronize its own time becomes a stratum-2 device, because stratum 2 is one NTP hop further away from the source.
• If multiple NTP servers are configured, a client will prefer the NTP server with the lowest stratum value.
NTP Architecture:
• NTP uses stratums 1 to 16 to define clock accuracy.
• A lower NTP stratum value represents higher accuracy.
• Clocks at NTP stratums 1 through 15 are in a synchronized state.
• Clocks at Network Time Protocol stratum 16 are not synchronized.
• The stratum works like a hop count: just as a TTL changes at every hop a packet passes, the stratum value increases by one for every NTP hop away from the reference clock.
NTP Modes:
• Cisco Routers and Cisco Switches can use four (4) different NTP modes.
• NTP Server, NTP Client, NTP Server/Client and NTP Peer or Symmetric Active mode.
NTP Master:
• To make a router become an authoritative NTP server to which internal devices can synchronize, use the ntp master command.
• The ntp master command tells the router that it is an NTP server.
• An NTP server is also referred to as an NTP master.
• In this case the router uses its internal hardware clock as the reference.
NTP Versions:
• Cisco IOS supports several versions, but version 3 and version 4 are the most commonly used.
• Version 4 supports IPv6 and is backwards compatible with NTP version 3.
• Network Time Protocol (NTP) version 4 also adds DNS support for IPv6.
• Another difference is that NTPv3 uses broadcast messages while NTPv4 uses multicast.
• NTPv4 also allows for increased security using public key cryptography and certificates.
Symmetric Active (1): A host operating in this mode sends periodic messages regardless of
the reachability state or stratum of its peer. By operating in this mode the host announces
its willingness to synchronize and be synchronized by the peer.
Symmetric Passive (2): This type of association is ordinarily created upon arrival of a
message from a peer operating in the symmetric active mode and persists only as long as
the peer is reachable and operating at a stratum level less than or equal to the host;
otherwise, the association is dissolved. However, the association will always persist until at
least one message has been sent in reply. By operating in this mode the host announces its
willingness to synchronize and be synchronized by the peer.
Client (3): A host operating in this mode sends periodic messages regardless of the
reachability state or stratum of its peer. By operating in this mode the host, usually a LAN
workstation, announces its willingness to be synchronized by, but not to synchronize the
peer.
Server (4): This type of association is ordinarily created upon arrival of a client request
message and exists only in order to reply to that request, after which the association is
dissolved. By operating in this mode the host, usually a LAN time server, announces its
willingness to synchronize, but not to be synchronized by the peer.
Broadcast (5): A host operating in this mode sends periodic messages regardless of the
reachability state or stratum of the peers. By operating in this mode the host, usually a LAN
time server operating on a high-speed broadcast medium, announces its willingness to
synchronize all of the peers, but not to be synchronized by any of them.
GNS3:
INITIAL CONFIG:
Router(config)#hostname NTPCORE01
NTPCORE01(config)#int e0/0
NTPCORE01(config-if)#ip add dhcp
NTPCORE01(config-if)#no shut
NTPCORE01(config-if)#int e0/3
NTPCORE01(config-if)#ip add 192.168.1.98 255.255.255.0
NTPCORE01(config-if)#no shut
Switch(config)#hostname NTPCLIENT01
NTPCLIENT01(config)#int gi0/0
NTPCLIENT01(config-if)#no switchport
NTPCLIENT01(config-if)#ip add 192.168.1.99 255.255.255.0
NTPCLIENT01(config-if)#no shut
Switch(config)#hostname NTPCLIENT02
NTPCLIENT02(config)#int gi2/0
NTPCLIENT02(config-if)#no switchport
NTPCLIENT02(config-if)#ip add 192.168.1.100 255.255.255.0
NTPCLIENT02(config-if)#no shut
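The config above only captures the base IP addressing; the NTP commands themselves are not shown in these notes. A minimal sketch of what they would typically look like for this topology (the stratum value 5 on the master is an illustrative choice, not taken from the lab):
NTPCORE01(config)#ntp master 5
NTPCLIENT01(config)#ntp server 192.168.1.98
NTPCLIENT02(config)#ntp server 192.168.1.98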
VERIFICATION:
NTPCORE01(config)#do sh clock
04:22:55.983 UTC Wed Jan 1 2020
NTPCLIENT01(config)#do sh clock
*04:22:05.540 UTC Wed Jan 1 2020
NTPCLIENT02(config)#do sh clo
*04:22:24.554 UTC Wed Jan 1 2020
Field descriptions (show ntp associations):
Characters in display lines:
* —Synchronized to this peer
# —Almost synchronized to this peer
+ —Peer selected for possible synchronization
- —Peer is a candidate for selection
~ —Peer is statically configured
Address Address of peer.
ref clock Address of reference clock of peer.
St Stratum of peer.
When Time since last NTP packet was received from peer.
Poll Polling interval (in seconds).
Reach Peer reachability (bit string, in octal).
Delay Round-trip delay to peer (in milliseconds).
Field descriptions (show ntp status):
Synchronized System is synchronized to an NTP peer.
Unsynchronized System is not synchronized to any NTP peer.
Stratum NTP stratum of this system.
Reference Address of peer the system is synchronized to.
nominal freq Nominal frequency of system hardware clock.
actual freq Measured frequency of system hardware clock.
Precision Precision of the clock of this system (in Hertz).
reference time Reference time stamp.
clock offset Offset of the system clock to synchronized peer.
root delay Total delay along path to root clock.
root dispersion Dispersion of root path.
peer dispersion Dispersion of synchronized peer.
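The two field tables above describe the output of the standard NTP verification commands, which on the lab devices would be run as, for example:
NTPCLIENT01#show ntp associations
NTPCLIENT01#show ntp status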
SYSLOG
• Syslog stands for System Logging; it is a standard protocol used to send system log messages.
• Cisco network devices such as routers and switches use syslog to send system messages.
• Cisco network devices send debug output to a local logging process inside the device.
• Syslog is used on a variety of devices to give system information to the system administrator.
• Most Cisco devices use the syslog protocol to manage system logs and system alerts.
• Logging can be used for fault notification, network forensics, and security auditing.
• Syslog messages can be output to the console, a local buffer, or a remote syslog server.
• Logs can include content flow, configuration changes, new software installs, etc.
• Logging helps to detect unusual network traffic, network device failures, issues, etc.
• Syslog uses UDP port 514.
The service sequence-numbers command was not configured, but the service
timestamps command was configured. The facility is LINK, the severity is 3, and the
MNEMONIC is UPDOWN.
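The sample message itself is not reproduced in these notes; a message with that facility, severity, and mnemonic follows the standard %FACILITY-SEVERITY-MNEMONIC format, along these lines (the timestamp and interface name here are only illustrative):
*Mar 1 18:46:11.406: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up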
To configure the router to send system messages to a syslog server, complete the following
three steps:
Step 1. Configure the IP address (or hostname) of the syslog server in global configuration mode, using the logging host ip-address command (or simply logging ip-address).
Step 2. (Optional) Control which messages are sent to the server based on severity, using the logging trap level global configuration command.
Step 3. Optionally, configure the source interface with the logging source-interface
interface-type interface-number global configuration mode command. This specifies that
syslog packets contain the address of a specific interface, regardless of which interface the
packet uses to exit the router. For example, to set the source interface to g0/0, use the
following command:
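A minimal sketch of these three steps (the server address 192.168.1.151 and the severity level are illustrative values, not taken from the notes):
Router(config)#logging host 192.168.1.151
Router(config)#logging trap warnings
Router(config)#logging source-interface g0/0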
#Show logging
GNS3
DHCP
DHCP stands for Dynamic Host Configuration Protocol:
Dynamic: addresses are assigned automatically.
Host: any computer that is connected to the network.
Configuration: to configure a host means to provide it with network information.
Protocol: a set of rules and regulations.
Advantages of DHCP:
• Primary advantage of DHCP is easier management of IP addresses.
• Centralized network client configuration.
• DHCP greatly reduces the time required to configure and reconfigure computers.
• A DHCP server assigning IP addresses automatically avoids configuration errors.
• Ease of adding new clients to a network.
• Reuse of IP addresses reduces the total number of IP addresses that are required.
• No need to reconfigure each client separately.
• Configure the network from a centralized area.
• Easy handling of new users and reuse of IP address can be achieved.
DHCP Client:
• DHCP client is a host using DHCP to obtain configuration parameters.
• The endpoint that receives configuration information from a DHCP server.
DHCP OPERATIONS:
STEP 1:
The DHCP client sends out a DHCP Discover message to find the DHCP server. The DHCP Discover message is a Layer 2 broadcast as well as a Layer 3 broadcast, with fields along these lines:
Src IP: 0.0.0.0 #Client has no IP address yet# Dst IP: 255.255.255.255 #All-nets broadcast#
Src MAC: client MAC address Dst MAC: FF:FF:FF:FF:FF:FF
Hence from the above fields it is clear the DHCP Discover message is a Network Layer and Data Link Layer broadcast.
STEP 2:
The DHCP server receives the DHCP Discover message from the client and sends back a DHCP Offer message with field information as below:
Src IP: DHCP server IP address Dst IP: 255.255.255.255 #Still broadcast as the client still has no IP address#
Hence from the above fields it is clear that the DHCP Offer message is a Layer 2 unicast (sent to the client's MAC address) but still a Layer 3 broadcast.
STEP 3:
The DHCP client receives the DHCP Offer from the DHCP server and sends back a DHCP Request message with the following fields:
Src IP: 0.0.0.0 #The IP address still has not been assigned to the client# Dst IP: 255.255.255.255 #Still a broadcast: the client may have received Offers from more than one DHCP server in its domain; it accepts the Offer it receives earliest, and by broadcasting the Request it informs the other DHCP servers that they can return their offered IP addresses to their available pools#
The above fields show that the DHCP Request message is, like the Discover, a broadcast at both the Data Link Layer and the Network Layer.
STEP 4:
Once the DHCP client sends the Request for the offered IP address, the DHCP server responds with an Acknowledge message towards the DHCP client with the fields below:
Src IP: DHCP server IP address Dst IP: 255.255.255.255
The above fields show that the DHCP Acknowledge is a Layer 2 unicast but still a Layer 3 broadcast.
For more detail on this exchange you should become familiar with the DHCP header fields. A few important ones referenced later in these notes are yiaddr (the address offered to the client), chaddr (the client hardware address), and the options field.
DHCP IPCONFIG/RENEW
GNS3
1. DHCP-server
ip dhcp pool CCIE123
network 192.168.1.0 255.255.255.0
default-router 192.168.1.100
dns-server 192.168.1.50
ip dhcp excluded-address 192.168.1.100
2. DHCP-client
interface Ethernet0/0
ip address dhcp
end
3. Verifications:
DHCP-server#show ip dhcp pool CCIE123
DHCP-server#show ip dhcp server statistics
show ip dhcp pool
DHCP Database
DHCP address pools are stored in non-volatile RAM (NVRAM). There is no limit on the number of
address pools. An address binding is the mapping between the client’s IP and hardware addresses.
The client’s IP address can be configured by the administrator (manual address allocation) or
assigned from a pool by the DHCP server.
Manual bindings are stored in NVRAM. Manual bindings are just special address pools configured by
a network administrator. There is no limit on the number of manual bindings.
Automatic bindings are IP addresses that have been automatically mapped to the MAC addresses of
hosts that are found in the DHCP database. Automatic bindings are stored on a remote host called
the database agent.
A DHCP database agent is any host--for example, an FTP, TFTP, or RCP server--that stores the DHCP
bindings database. The bindings are saved as text records for easy maintenance. You can configure
multiple DHCP database agents and you can configure the interval between database updates and transfers.
For any DHCP pool, you can configure a primary subnet and any number of secondary subnets.
Each subnet is a range of IP addresses that the device uses to allocate an IP address to a DHCP client.
The DHCP server multiple subnet functionality enables a Cisco DHCP server address pool to manage
additional IP addresses by adding the addresses to a secondary subnet of an existing DHCP address
pool (instead of using a separate address pool).
CORRECT CONFIG:
ip dhcp pool dhcp_1
network 172.16.1.0 255.255.255.0
network 172.16.2.0 255.255.255.0 secondary
network 172.16.3.0 255.255.255.0 secondary
network 172.16.4.0 255.255.255.0 secondary
!
interface Loopback111
ip address 172.16.1.1 255.255.255.255 secondary
ip address 172.16.2.1 255.255.255.255 secondary
ip address 172.16.3.1 255.255.255.255 secondary
ip address 172.16.4.1 255.255.255.255 secondary
WRONG CONFIG:
ip dhcp pool dhcp_1
network 172.16.1.0 255.255.255.0
lease 1 20 30
accounting default
!
ip dhcp pool dhcp_2
network 172.16.2.0 255.255.255.0
lease 1 20 30
accounting default
!
ip dhcp pool dhcp_3
network 172.16.3.0 255.255.255.0
lease 1 20 30
accounting default
!
ip dhcp pool dhcp_4
network 172.16.4.0 255.255.255.0
lease 1 20 30
accounting default
!
interface Loopback111
ip address 172.16.1.1 255.255.255.255 secondary
ip address 172.16.2.1 255.255.255.255 secondary
ip address 172.16.3.1 255.255.255.255 secondary
ip address 172.16.4.1 255.255.255.255 secondary
An address binding is a mapping between the IP address and MAC address of a client.
Manual bindings are IP addresses that are manually mapped to MAC addresses of hosts that are
found in the DHCP database. Manual bindings are stored in the NVRAM of the DHCP server.
1. enable
2. configure terminal
3. ip dhcp pool pool-name
4. host address [mask | /prefix-length]
5. client-identifier unique-identifier # 01b7.0813.8811.66 [For example, 01b7.0813.8811.66, where
01 represents the Ethernet media type and the remaining bytes represent the MAC address of the
DHCP client]
6. hardware-address hardware-address [protocol-type | hardware-number] # This command is
used for BOOTP requests.
7. client-name name
8. end
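A sketch of what such a manual binding could look like when put together (the pool name, host address, and client name below are illustrative, reusing the example identifier from step 5):
ip dhcp pool MANUAL-PRINTER
 host 192.168.1.50 255.255.255.0
 client-identifier 01b7.0813.8811.66
 client-name printer01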
The benefit of this feature is that it eliminates the need for a long configuration file and reduces the
space required in NVRAM to maintain address pools.
*time* Jan 21 2005 03:52 PM # Specifies the time the file was created.
*version* 2 # Specifies the database version number.
1. enable
2. configure terminal
3. ip dhcp pool name
4. origin file tftp://10.1.0.1/static-bindings
5. end
6. show ip dhcp binding [address]
Routers, by default, will not forward broadcast packets. Since DHCP client messages use the
destination IP address of 255.255.255.255 (all Nets Broadcast), DHCP clients will not be able to send
requests to a DHCP server on a different subnet unless the DHCP/BootP Relay Agent is configured on
the router.
The DHCP/BootP Relay Agent will forward DHCP requests on behalf of a DHCP client to the DHCP
server.
The DHCP/BootP Relay Agent sets the source IP address of the DHCP frames going to the DHCP server to its own address (and records the address of the client-facing interface in the giaddr field).
This allows the DHCP server to respond via unicast to the DHCP/BootP Relay Agent.
The DHCP Relay Agent is responsible for this job: it receives the broadcast messages, converts them to unicast messages, and sends them on to the remote DHCP server.
Cisco IOS can encapsulate any broadcast-destined UDP datagram into a unicast-destined UDP datagram; that feature is called IP Helper.
By default, Cisco IOS/IOS-XE routers do this job for DHCP messages automatically; we can tell the router to help other UDP packets as well, using the ip forward-protocol udp command:
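For example, to have the router also relay broadcasts for a custom application on UDP port 5000 (an illustrative port, not from the notes), and to stop relaying TFTP:
Router(config)#ip forward-protocol udp 5000
Router(config)#no ip forward-protocol udp tftp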
To forward the BootP/DHCP request from the client to the DHCP server, the ip helper-address interface command is used.
The ip helper-address command can be configured to forward any UDP broadcast based on its UDP port number. By default, ip helper-address forwards the following UDP broadcasts: Time (port 37), TACACS (49), DNS (53), BOOTP/DHCP server (67), BOOTP/DHCP client (68), TFTP (69), NetBIOS name service (137), and NetBIOS datagram service (138).
DHCPSERVER18
Int e0/0
Ip address 192.168.1.100 255.255.255.0
no shutdown
DHCP-RELAY-AGENT17
Interface gi0/0
no switchport
ip address 192.168.1.99 255.255.255.0
no shutdown
Interface gi0/2
no switchport
ip add 172.16.32.100 255.255.255.0
ip helper-address 192.168.1.100
no shutdown
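The DHCPSERVER18 configuration above only addresses the shared segment; for the relayed 172.16.32.0/24 subnet to actually be served, the server would also need a pool for that subnet and a route back towards the relay. A sketch based on the addressing shown above (the pool name is illustrative):
ip dhcp pool REMOTE-32
 network 172.16.32.0 255.255.255.0
 default-router 172.16.32.100
!
ip route 172.16.32.0 255.255.255.0 192.168.1.99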
VERIFICATIONS:
DHCPDISCOVER
When a client boots up for the first time, it is said to be in the Initializing state, and transmits a
DHCPDISCOVER message on its local physical subnet over User Datagram Protocol (UDP) port 67
(BootP server). Since the client has no way of knowing the subnet to which it belongs, the
DHCPDISCOVER is an all subnets broadcast (destination IP address of 255.255.255.255), with a
source IP address of 0.0.0.0. The source IP address is 0.0.0.0, since the client does not have a
configured IP address. If a DHCP server exists on this local subnet and is configured and operating
correctly, the DHCP server will hear the broadcast and respond with a DHCPOFFER message. If a
DHCP server does not exist on the local subnet, there must be a DHCP/BootP Relay Agent on this
local subnet to forward the DHCPDISCOVER message to a subnet that contains a DHCP server.
This relay agent can either be a dedicated host (for example, Microsoft Windows Server), or router
(for example, a Cisco router configured with interface level IP helper statements).
DHCPOFFER
A DHCP server that receives a DHCPDISCOVER message may respond with a DHCPOFFER message on
UDP port 68 (BootP client). The client receives the DHCPOFFER and moves into the Selecting state.
This DHCPOFFER message contains initial configuration information for the client. For example, the
DHCP server will fill in the yiaddr field of the DHCPOFFER message with the requested IP address.
The subnet mask and default gateway are specified in the options field, subnet mask and router
options, respectively. Other common options in the DHCPOFFER message include IP Address lease
time, renewal time, domain name server, and NetBIOS name server (WINS). The DHCP server will
send the DHCPOFFER to the broadcast address, but will include the client's hardware address in the
chaddr field of the offer, so the client knows that it is the intended destination. In the event that the
DHCP server is not on the local subnet, the DHCP server will send the DHCPOFFER, as a unicast
packet, on UDP port 67, back to the DHCP/BootP Relay Agent from which the DHCPDISCOVER came.
The DHCP/BootP Relay Agent will then either broadcast or unicast the DHCPOFFER on the local
subnet on UDP port 68, depending on the Broadcast flag set by the Bootp client.
DHCPREQUEST
After the client receives a DHCPOFFER, it responds with a DHCPREQUEST message, indicating its
intent to accept the parameters in the DHCPOFFER, and moves into the Requesting state. The client
may receive multiple DHCPOFFER messages, one from each DHCP server that received the original
DHCPDISCOVER message. The client chooses one DHCPOFFER and responds to that DHCP server
only, implicitly declining all other DHCPOFFER messages. The client identifies the selected server by
populating the Server Identifier option field with the DHCP server's IP address. The DHCPREQUEST is
also a broadcast, so all DHCP servers that sent a DHCPOFFER will see the DHCPREQUEST, and each
will know whether its DHCPOFFER was accepted or declined. Any additional configuration options
that the client requires will be included in the options field of the DHCPREQUEST message. Even
though the client has been offered an IP address, it will send the DHCPREQUEST message with a
source IP address of 0.0.0.0. At this time, the client has not yet received verification that it is clear to
use the IP address.
DHCPACK
After the DHCP server receives the DHCPREQUEST, it acknowledges the request with a DHCPACK
message, thus completing the initialization process. The DHCPACK message has a source IP address
of the DHCP server, and the destination address is once again a broadcast and contains all the
parameters that the client requested in the DHCPREQUEST message. When the client receives the
DHCPACK, it enters into the Bound state, and is now free to use the IP address to communicate on
the network. Meanwhile, the DHCP server stores the lease in its database and uniquely identifies it
using the client identifier or chaddr, and the associated IP address. Both the client and server will use
this combination of identifiers to refer to the lease. The client identifier is the media type plus the MAC address of the device.
Before the DHCP client begins using the new address, the DHCP client must calculate the time
parameters associated with a leased address, which are Lease Time (LT), Renewal Time (T1), and
Rebind Time (T2). The typical default LT is 72 hours. You can use shorter lease times to conserve
addresses, if needed.
DHCPNAK
If the selected server is unable to satisfy the DHCPREQUEST message, the DHCP server will respond
with a DHCPNAK message. When the client receives a DHCPNAK message, or does not receive a
response to a DHCPREQUEST message, the client restarts the configuration process by going into the
Requesting state. The client will retransmit the DHCPREQUEST at least four times within 60 seconds
before restarting the Initializing state.
DHCPDECLINE
The client receives the DHCPACK and will optionally perform a final check on the parameters. The
client performs this procedure by sending Address Resolution Protocol (ARP) requests for the IP
address provided in the DHCPACK. If the client detects that the address is already in use by receiving
a reply to the ARP request, the client will send a DHCPDECLINE message to the server and restart the
configuration process by going into the Requesting state.
DHCPINFORM
If a client has obtained a network address through some other means or has a manually configured
IP address, a client workstation may use a DHCPINFORM request message to obtain other local
configuration parameters, such as the domain name and Domain Name Servers (DNSs). DHCP
servers receiving a DHCPINFORM message construct a DHCPACK message with any local
configuration parameters appropriate for the client without allocating a new IP address. This
DHCPACK will be sent unicast to the client.
DHCPRELEASE
A DHCP client may choose to relinquish its lease on a network address by sending a DHCPRELEASE
message to the DHCP server. The client identifies the lease to be released by the use of the client
identifier field and network address in the DHCPRELEASE message.
If you need to extend the current DHCP pool range, remove the current pool of addresses and specify the new range of IP addresses under the DHCP pool. In order to keep specific IP addresses or a range of addresses from being handed out by the DHCP pool, use the ip dhcp excluded-address command.
For example, the TFTP server, which stores the Cisco IOS XE image, can be customized with option
150 to support intelligent IP phones.
Virtual Private Networks (VPNs) allow the possibility that two pools in separate networks can have
the same address space, with private network addresses, served by the same DHCP server. Cisco IOS
XE software supports VPN-related options and suboptions such as the relay agent information
option and VPN identification suboption. A relay agent can recognize these VPN-related options and
suboptions and forward the client-originated DHCP packets to a DHCP server. The DHCP server can
use this information to assign IP addresses and other parameters, distinguished by a VPN identifier,
to help select the VPN to which the client belongs.
NAT/PAT
• NAT stands for Network Address Translation.
• NAT is a process that involves translating private IP addresses into public IP addresses.
• The process of translating one IP address into another is known as NAT.
• Routers and firewalls are the devices typically used for Network Address Translation.
• There are many forms and kinds of Network Address Translation (NAT).
• Network Address Translation is used to reduce the requirement for public IP addresses.
• Network Address Translation increases the security of internal computer networks.
• NAT translates private IP addresses into public IP addresses and public IP addresses into private IP addresses.
• NAT is used to connect a device with a private IP address to the Internet or a WAN.
• Network Address Translation hides an organization's internal network from the outside.
• Network Address Translation (NAT) modifies only the Layer 3 (IP) header.
• With PAT, port numbers are mapped onto a single public IP address to differentiate connections.
• Port Address Translation (PAT) modifies both the Layer 3 and the Layer 4 headers.
NAT SCENARIO:
TYPES OF NAT:
1. Static NAT
2. Dynamic NAT
3. PAT (NAT overload)
▪ PAT is the real reason we haven't run out of valid IP addresses on the Internet.
On Cisco IOS routers we can use the ip nat inside source and ip nat outside source commands.
Most of us are familiar with the ip nat inside source command because we often use it to translate private IP addresses on our LAN to a public IP address we received from our ISP.
What about the ip nat outside source command? Does it work in the same way as ip nat inside source?
ip nat inside source:
• Translates the source IP address of packets that travel from inside to outside.
• Translates the destination IP address of packets that travel from outside to inside.
ip nat outside source:
• Translates the source IP address of packets that travel from outside to inside.
• Translates the destination IP address of packets that travel from inside to outside.
GNS3:
REQUIRED CONFIG:
MOSCOWR19:
interface Ethernet0/1.40
encapsulation dot1Q 40
ip address 172.16.40.1 255.255.255.0
end
interface Ethernet0/2
ip address dhcp
end
PRE-CHECKS:
MOSCOWR19(config)#
int e0/2
ip nat outside
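The pre-check above only marks the outside interface; the NAT rule that produces the one-to-one translations in the validation below is not captured in the notes. A sketch, assuming a dynamic NAT pool of 50.1.1.1–50.1.1.2 and the VLAN 40 sub-interface as the inside network:
MOSCOWR19(config)#
int e0/1.40
ip nat inside
access-list 10 permit 172.16.40.0 0.0.0.255
ip nat pool PUBLIC50 50.1.1.1 50.1.1.2 netmask 255.255.255.0
ip nat inside source list 10 pool PUBLIC50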
VALIDATION:
MOSCOWR19#sh ip nat translations
Pro Inside global Inside local Outside local Outside global
icmp 50.1.1.1:19968 172.16.40.10:19968 8.8.8.8:19968 8.8.8.8:19968
--- 50.1.1.1 172.16.40.10 --- ---
icmp 50.1.1.2:16128 172.16.40.20:16128 8.8.8.8:16128 8.8.8.8:16128
--- 50.1.1.2 172.16.40.20 --- ---
MOSCOWR19(config)#
int e0/2
ip nat outside
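The second validation below shows every inside host sharing the single global address 50.1.1.2 with different port numbers, which is PAT (overload) behaviour. The overload rule itself is not shown in the notes; a sketch, reusing the hypothetical access list and pool from the previous step:
MOSCOWR19(config)#
no ip nat inside source list 10 pool PUBLIC50
ip nat inside source list 10 pool PUBLIC50 overload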
VALIDATIONS:
MOSCOWR19# sh ip nat translations
Pro Inside global Inside local Outside local Outside global
icmp 50.1.1.2:20224 172.16.40.10:20224 8.8.8.8:20224 8.8.8.8:20224
icmp 50.1.1.2:20480 172.16.40.10:20480 8.8.8.8:20480 8.8.8.8:20480
icmp 50.1.1.2:16384 172.16.40.20:16384 8.8.8.8:16384 8.8.8.8:16384
icmp 50.1.1.2:16640 172.16.40.20:16640 8.8.8.8:16640 8.8.8.8:16640
R1 Basic Configuration
R1(config)#interface f0/0
R1(config-if)#ip address dhcp
R1(config-if)#no shutdown
R1(config)#interface f0/1
R1(config-if)#ip add 192.168.0.100 255.255.255.0
R1(config-if)#no shutdown
R1(config)#ip name-server 8.8.8.8
R1(config)#ip domain-lookup
After traffic is sent from the 192.168.0.0/24 network, it is translated and forwarded outside.
PAT Configuration on R1
R1(config)#access-list 1 permit 192.168.0.0 0.0.0.255
R1(config)#ip nat pool mypool 192.168.169.139 192.168.169.139 netmask 255.255.255.0
R1(config)#ip nat inside source list 1 pool mypool overload
R1(config)#interface f0/0
R1(config-if)#ip nat outside
R1(config-if)#interface f0/1
R1(config-if)#ip nat inside
Imagine you have a large network that has many switches and routers, a dozen servers and
hundreds of workstations…wouldn’t it be great if you could monitor all those devices somehow?
Using a NMS (Network Management System) it’s possible to monitor all devices in your network.
Whenever something bad happens (like an interface that goes down) you will receive an e-mail or
text message on your phone so you can respond to it immediately.
Sounds good?
Back in the 80s, some smart folks figured out that we should have something to monitor all IP based
network devices. The idea was that most devices like computers, printers, and routers share some
characteristics. They all have an interface, an IP address, a hostname, buffers and so on.
They created a database with variables that could be used to monitor different items of network
devices and this resulted in SNMP (Simple Network Management Protocol).
SNMP runs on the application layer and consists of an SNMP manager and an SNMP agent. The SNMP manager is the software running on a PC or server that monitors the network devices; the SNMP agent runs on the network device.
The database that I just described is called the MIB (Management Information Base) and an
object could be the interface status on the router (up or down) or perhaps the CPU load at a certain
moment. An object in the MIB is called an OID (Object Identifier).
The SNMP manager will be able to send periodic polls to the router and it will store this information. This way it's possible to create graphs to show you the CPU load or interface load from the last 24 hours, week, month or whatever you like.
It’s also possible to configure your network devices through SNMP. This might be useful to configure
a large number of switches or routers from your network management system so you don’t have to
telnet/ssh into each device separately to make changes.
The packet that we use to poll information is called a SNMP GET message and the packet that is
used to write a configuration is a SNMP SET message.
SNMP Manager:
• Software that runs on the network administrator's management station.
• A computer used to monitor the network, also called a Network Management System (NMS).
SNMP Agent:
• Software that runs on the network devices that we want to monitor: routers, switches, firewalls, etc.
The ifIndex value is a unique identifying number associated with a physical or logical interface
SNMP Messages:
• SNMP Messages are used to communicate between the SNMP Manager and Agents.
• SNMPv1 supports five basic SNMP messages Get, Get-Next, Get-Response, Set & Trap.
• SNMPv2c, two new messages were added Inform and Getbulk.
• GET Messages are sent by the SNMP Manager to retrieve info from SNMP Agents.
• SET messages are used by the SNMP Manager to set or change values on SNMP Agents.
• GET-NEXT retrieves the value of the next object in the MIB.
• GET-RESPONSE Message is used by SNMP Agents to reply to GET & GET-NEXT messages.
• TRAP Messages are initiated from the SNMP Agents to inform the SNMP Manager on event.
• Inform messages are like traps, but the SNMP Manager acknowledges that the message has been received.
• The Getbulk operation efficiently retrieves large blocks of data, such as multiple rows in a table.
SNMPv1:
• SNMP version 1 security is based on community strings.
• An SNMP community string can be considered as password.
SNMPv2c:
• SNMPv2c is an updated version of SNMPv1/SNMPv2.
• SNMPv2c uses the community-based security model of SNMPv1.
• The "c" in SNMPv2c stands for "community".
• SNMPv2c sends the community strings in clear text.
SNMPv3:
• SNMPv3 is the most secure version among other SNMP versions.
• SNMPv3 provides secure access to devices using authentication & encryption.
• Authentication security feature makes sure that the message is from a valid source.
• Integrity security feature makes sure that the message has not been tampered.
• Encryption security feature provides confidentiality by encrypting the contents.
• SNMPv3 will never send the user password in the clear text.
• SNMPv3 uses the SHA1 or MD5 hash-based authentication.
• SNMPv3 encryption is done using the AES, 3DES and DES.
• SNMP offers three security levels: noAuthNoPriv, AuthNoPriv and AuthPriv.
• Auth stands for Authentication and Priv for Privacy.
• noAuthNoPriv = authentication by username match only, and no encryption.
• AuthNoPriv = authentication (md5/sha) but no encryption.
• AuthPriv = authentication (md5/sha) AND encryption(aes/3des/des).
SNMPv1 and SNMPv2 only support noAuthNoPriv since they don’t offer any authentication or
encryption. SNMPv3 supports any of the three security levels. When you decide to use
noAuthNoPriv for SNMPv3 then the username will replace the community-string.
To give you an example of what a NMS looks like, I’ll show you some screenshots of Observium.
Observium is a free SNMP based network monitoring platform which can monitor Cisco, Linux,
Windows and some other devices. It’s easy to install so if you never worked with SNMP or
monitoring network devices before I can highly recommend giving it a try. You can download it
at http://www.observium.org.
Above you see an overview of all the devices that our NMS manages. There are two linux devices,
two Cisco devices and there’s a VMWare ESXi server. You can see the uptime of all devices.
This switch is called “mmcoreswitch01” and it’s a Cisco Catalyst 3560E. It gives us a nice overview of
the CPU load, the temperature and the interfaces that are up or down.
Here's the temperature of this switch from the last month. When the temperature exceeds a certain value (let's say 50 degrees Celsius) then we can tell our NMS to send us an e-mail.
Here's an overview of the VLAN 10 interface. You can see how much traffic is sent and received on this interface. We can zoom in on one of the graphs if we want:
This gives a nice overview of how much traffic was sent in the last 24 hours of this particular
interface.
I hope this gives you an idea of what a NMS looks like and why this might be useful. If you want to
take a look at Observium yourself you can use the live demo on their website:
http://demo.observium.org/
All the information that Observium shows us is retrieved by using SNMP GET messages:
Besides using SNMP GET messages, an SNMP agent can also send SNMP traps. A trap is a notification that is sent immediately as soon as something occurs, for example, an interface that goes down:
We can use an NMS to monitor one of our network devices but how do we exactly know what to
monitor? There are so many things we could check for…a single interface on a router has over 20
things we could check: input/output errors, sent/received packets, interface status, and so on. Each
of these things to check has a different OID (Object Identifier).
Since there are so many OIDs, the MIB is organized into a hierarchy that looks like a tree. In this tree,
you will find a number of branches with OIDs that are based on RFC standards but you will also find
some vendor specific variables. Cisco, for example, has variables to monitor EIGRP and other Cisco
protocols.
Let me give you an example of this tree by showing where the ‘hostname’ and ‘domainname’
objects are located. These objects can be used to discover the hostname and domainname of the
router.
The tree starts with the “iso” branch and then we drill our way down to org, dod, internet, private,
enterprises, cisco, local, lcpu and there we find the hostname and domainname objects. Note that
the branches have numbers…instead of typing out the names I can just use the numbers.
1.3.6.1.4.1.9.2.1.3 will be used to get information about the hostname and 1.3.6.1.4.1.9.2.1.4 for the
domainname.
The MIB is huge and knowing where to find the right objects can be troublesome, that’s why most
NMSes have a nice GUI that lets you select the things you want to monitor without having to worry
about the object numbers.
If you want to test SNMP you don’t have to install a NMS, you can use SNMPGET which is a free tool
that you can download here:
https://sourceforge.net/projects/net-snmp/
Here’s an example of SNMPGET where I use a linux host to query a router that has been configured
for SNMP:
The community string that I used is CISCO@123, the IP address of the router is 192.168.32.200 and
the object I’m interested in is 1.3.6.1.4.1.9.2.1.3. As a result, the router reports its hostname. Here’s
another example for the domainname:
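The screenshots of those queries are not reproduced here; the equivalent net-snmp command-line calls would look something like this (the trailing .0 is the usual instance index for a scalar object):
snmpget -v2c -c CISCO@123 192.168.32.200 .1.3.6.1.4.1.9.2.1.3.0
snmpget -v2c -c CISCO@123 192.168.32.200 .1.3.6.1.4.1.9.2.1.4.0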
GNS3
1. SNMP v2c
Switch(config)#
snmp-server community cisco ro (optional)
snmp-server community cisco rw (optional)
snmp-server location India (optional)
snmp-server contact info@networkjourney.com (optional)
snmp-server community cisco@123
snmp-server host 192.168.1.151 version 2c cisco@123
snmp-server enable traps
If I use the snmp-server enable traps command it will enable all SNMP traps:
If you want to test this with a SNMP server then I can highly recommend to take a look at
Observium. They offer a free “community” edition of their network monitoring software that
supports many network devices out of the box (Cisco included).
Verification:
2. SNMP v3
SNMPV3 AUTHPRIV
Switch (config)# snmp-server group group1 v3 priv
Switch (config)# snmp-server user user1 group1 v3 auth md5 MYPASS123 priv aes 128 MYKEY123
Switch (config)# snmp-server enable traps
Switch (config)# snmp-server host 1.1.1.10 user1
Above you can see that we have a group called "group1" containing the user "user1", and that we use the default read view.
If you are a Linux user you can use the excellent snmpwalk command-line utility that tests if
your router can be accessed using SNMP. It works for SNMPv1, v2 and v3:
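The snmpwalk output is not reproduced in these notes; hedged examples of the net-snmp syntax, using the community string and SNMPv3 credentials configured above (the agent address 192.168.1.98 is just an illustrative device address):
snmpwalk -v2c -c cisco@123 192.168.1.98 system
snmpwalk -v3 -l authPriv -u user1 -a MD5 -A MYPASS123 -x AES -X MYKEY123 192.168.1.98 system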
As you can see snmpwalk is able to extract information from my router. We’ll add the router to
a NMS now. I’m using Observium which is an excellent free and open source NMS. If your
environment has a lot of Cisco or Linux devices then I can highly recommend to give it a try:
Once Routers are added to SNMP MANAGER (observium), we can analyse various aspects w.r.t it:
MULTICAST
There are three types of traffic that we can choose from for our networks:
• Unicast
• Broadcast
• Multicast
If you want to send a message from one source to one destination, we use unicast. If you want to
send a message from one source to everyone, we use broadcast.
What if we want to send a message from one source to a group of receivers? That’s when we
use multicast.
Why do you want to use multicast instead of unicast or broadcast? That’s best explained with an
example. Let’s imagine that we want to stream a high definition video on the network using unicast,
broadcast or multicast. You will see the advantages and disadvantages of each traffic type. Let’s start
with unicast:
Above we have a small network with a video server that is streaming a movie and four hosts who
want to watch the movie. Two hosts are on the same LAN, the other two hosts are on another site
that is connected through a 30 Mbit WAN link.
A single HD video stream requires 6 Mbps of bandwidth. When we are using unicast, the video
server will send the packets to each individual host. With four hosts, it means the video server will
be streaming 4x 6Mbps = 24Mbps of traffic.
Each additional host that wants to receive this video stream will put more burden on the video
server and requires more bandwidth from the WAN link. Right now, we require 2x 6Mbps of
bandwidth for H3 and H4. When four more hosts would join on the right side, our WAN link would
be completely saturated.
What about the LAN on the left side? If these are Gigabit links then a couple of hosts watching a
movie will be no problem. What if there’s 150 users that want to watch the movie? That’s when we
start running into issues.
The main problem with unicast traffic is that it is not scalable. Are there any advantages? It’s simple
since unicast works “out of the box”. You will see that multicast requires some additional protocols
to make it work. Also, multicast only supports UDP traffic so we can’t use the advantages of TCP like
windowing and acknowledgments.
If our video server would broadcast its traffic then the load on the video server will be reduced, it’s
only sending the packets once. The problem however is that everyone in the broadcast domain will
receive it…whether they like it or not. Another issue with broadcast traffic is that routers do not
forward broadcast traffic, it will be dropped.
Multicast traffic is very efficient. This time we only have two hosts that are interested in receiving
the video stream. The video server will only send the packets once, the switches and routers will
only forward traffic to the hosts that want to receive it. This reduces the load of the video server and
network traffic in general.
When using unicast, each additional host will increase the load and traffic rate. With multicast it will
remain the same:
NOTE:
What about the Internet? Since multicast is so much more efficient than unicast, large
companies like Netflix and Youtube must be using this to stream videos right? Unfortunately
multicast on the Internet has never really been implemented. These large video companies use
LOTS of unicast traffic to deliver videos to their customers. The only place where you might see
multicast on the Internet is your local ISP. They typically use multicast for IPTV to deliver video to
their own customers.
Multicast Components
Multicast is efficient but it doesn’t work “out of the box”. There are a number of components that
we require:
Multicast Application:
We also require applications that support multicast. A simple example is the VLC mediaplayer, it can
be used to stream and receive a video on the network.
Above you can see the router is receiving the multicast traffic from the video server. It doesn’t
know where and if it should forward this multicast traffic. We need some mechanism on our hosts
that tell the router when they want to receive multicast traffic. We use the IGMP (Internet Group
Management Protocol) for this. Hosts that want to receive multicast traffic will use the IGMP
protocol to tell the router which multicast traffic they want to receive.
IGMP helps the router to figure out on what interfaces it should forward multicast traffic but what
about switches? Take a look at the following image:
Our router knows that it has to forward the multicast traffic since a host used IGMP to tell the router
it is interested. Once the multicast traffic arrives at the switch, we have another problem. Switches
learn MAC addresses by looking at the source address of an Ethernet frame. Since we use multicast
addresses only for the destination, how is the switch supposed to learn where to forward multicast traffic to? (On Cisco switches this is solved with IGMP snooping: the switch listens in on the IGMP messages to learn which ports have interested receivers.)
Above we have our video server that is forwarding multicast traffic to R1. On the bottom there’s H1
who is interested in receiving it.
With unicast routing, each router advertises its directly connected interfaces in a routing protocol.
Routers who receive unicast packets only care about the destination address. They check their
routing tables, find the outgoing interface and forward the packets towards the destination. With
multicast routing, things are not that simple…the destination is a multicast group address and the
multicast packets have to be forwarded to multiple receivers throughout the network.
One of the differences between unicast and multicast IP addresses is that unicast IP addresses represent a single network device while multicast IP addresses represent a group of receivers. IANA
has reserved the class D range to use for multicast. The first 4 bits in the first octet are 1110 in binary
which means that we have the 224.0.0.0 through 239.255.255.255 range for IP multicast addresses.
IP multicast range: 224.0.0.0 – 239.255.255.255
Some of the addresses are reserved however and we can’t use them for our own applications.
The 224.0.0.0 – 224.0.0.255 range has been reserved by IANA to use for network protocols. All
multicast IP packets in this range are not forwarded by routers between subnets. Let me give you an
overview of reserved link-local multicast addresses, I’m sure you recognize some of the protocols:
Address Usage
224.0.0.1 All hosts
224.0.0.2 All routers
224.0.0.3 Unassigned
224.0.0.5 OSPF (all OSPF routers)
224.0.0.6 OSPF (designated routers)
224.0.0.7 ST Routers
224.0.0.8 ST Hosts
224.0.0.9 RIPv2
224.0.0.10 EIGRP
224.0.0.13 PIM
224.0.0.18 VRRP
You probably recognized OSPF (224.0.0.5 and 224.0.0.6), RIPv2 (224.0.0.9) and EIGRP
(224.0.0.10). Once you dive more into multicast you will also encounter PIM (Protocol Independent
Multicast) with 224.0.0.13.
IANA also reserved the 224.0.1.0 /24 range for certain applications. Everything in the 224.0.1.0 /24
range can be routed however, unlike the 224.0.0.0 /24 range. Here’s an overview:
Address Usage
224.0.1.1 NTP
224.0.1.2 SGI-Dogfight
224.0.1.3 Rwhod
224.0.1.6 NSS
224.0.1.32 Mtrace
224.0.1.33 RSVP-encap-1
224.0.1.34 RSVP-encap-2
224.0.1.39 Cisco-RP-Announce
224.0.1.40 Cisco-RP-Discovery
224.0.1.52 Mbone-VCR-Directory
Many of these applications are never used; if you work with Cisco multicast you will see 224.0.1.39 and 224.0.1.40 when you configure an RP (Rendezvous Point).
Just make sure you don’t use the 224.0.0.0 /24 and 224.0.1.0 /24 range and you will be safe. Just like
private and public IP addresses for unicast, IANA has reserved a range of IP addresses that we can
use for multicast on our local networks. This is the 239.0.0.0 /8 range. Everything between 239.0.0.0
– 239.255.255.255 is safe to use on your own networks.
Multicast IP addresses live in the 224.0.0.0 – 239.255.255.255 range but what about MAC addresses and Ethernet frames? What do we do on layer 2 to make multicast work? Let me show you an example of a MAC address:
Above you see an example of a MAC address. In the first octet, bit 0 has been reserved for
broadcast or multicast traffic. When we have unicast traffic, this bit will be set to 0. For
broadcast or multicast traffic this bit will be set to 1.
On layer 3 IANA has reserved the class D range (224.0.0.0 – 239.255.255.255) for multicast IP
addresses. What about layer 2? What MAC addresses do we use for multicast traffic?
For layer 2 we also have a reserved prefix to use for multicast traffic. The 24-bit MAC address
prefix 01-00-5E is reserved for layer 2 multicast. Unfortunately only half of the MAC addresses
in this 24-bit prefix can be used for multicast, this means we only have 23 bits of MAC address
space to use for multicast. Here’s an illustration:
As you can see the first 3 octets are 01-00-5E. This is the reserved range. This means there are
8+8+8 = 24 bits left for us to use. I just told you that only half of this 24-bit space is available to
us which means that only 23 bits can be used. Why can we only use 23 bits?
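The notes stop at that question; the usual answer is that only half of the 01-00-5E block was set aside for IP multicast, so the 25th bit of the MAC address is fixed at 0 and only the low-order 23 bits of the multicast IP address are copied into the MAC. A multicast IP address has 28 variable bits (the first 4 bits are always 1110), so 5 bits are lost and 32 different group addresses map onto the same multicast MAC address. A worked example of the mapping:
239.1.1.1 = 1110 1111 . 0000 0001 . 0000 0001 . 0000 0001
Low-order 23 bits copied into the MAC: 000 0001 / 0000 0001 / 0000 0001
Resulting multicast MAC: 01-00-5E-01-01-01 (224.1.1.1 maps to the same MAC)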
IGMP Version 2
IGMP (Internet Group Management Protocol) version 1 is the first version that hosts can use to
announce to a router that they want to receive multicast traffic from a specific group. It’s a simple
protocol that uses only two messages:
• Membership report
• Membership query
When a host wants to join a multicast group, it will send a membership report to the group address
that it wants to receive. When the multicast-enabled router receives this message, it will start
forwarding the requested multicast traffic on the interface where it received the IGMP membership
report on.
The router will periodically send a membership query to destination 224.0.0.1 (all hosts multicast
group address). Hosts that receive this message will respond with a membership report to tell the
router that they are still interested in receiving the multicast traffic. When the router receives the membership report, its expiry timer will be refreshed. When no hosts respond, the router knows that nobody is interested anymore in the multicast traffic and it will then remove the entry once the timer expires.
IGMP version 2 is the “enhanced” version of IGMP version 1. One of the major reasons for a new
version was to improve the “leave” mechanism. In IGMP version 1, hosts just stop listening to the
multicast group address but they never report this to the router. Here are the new features:
• Leave group messages: when a host no longer wants to listen to a multicast group address then
it will report to the router that it has stopped listening.
• Group specific membership query: the router is now able to send a membership query for a
specific group address. When the router receives a leave group message, it will use this query to
check if there are still any hosts interested in receiving the multicast traffic.
• MRT (Maximum Response Time) field: this is a new field in query messages. It specifies how
much time hosts have to respond to the query.
• Querier election process: when there are two routers in the same subnet then only one of them
should send query messages. The election ensures only one router becomes the active querier.
The router with the lowest IP address becomes the active querier.
The main goal of a router is to route packets. In other words: when it receives an IP packet it has
to look at the destination address, check the routing table and figure out the next hop where to
forward the IP packet to. We use routing protocols to learn different networks and to fill the
routing table.
To route our multicast traffic, we need to use a multicast routing protocol. There are two types
of multicast routing protocols:
• Dense Mode
• Sparse Mode
Dense Mode
Dense mode multicast routing protocols are used for networks where most subnets in your
network should receive the multicast traffic. When a router receives the multicast traffic, it will
flood it on all of its interfaces except the interface where it received the multicast traffic on.
Here’s an example:
Above we have a video server sending multicast traffic to R1. When R1 receives these packets, it
will flood them on all of its interfaces. R2 and R3 will do the same so our two hosts (H2 and H3)
will receive the multicast traffic. In the example above both of our hosts are interested in the
multicast traffic but what if there are hosts that don’t want to receive it?
A multicast router can tell its neighbor that it doesn’t want to receive the multicast traffic
anymore. This happens when:
• The router doesn’t have any downstream neighbors that require the multicast traffic.
• The router doesn’t have any hosts on its directly connected interface that require the
multicast traffic.
Above we see R1 that receives the multicast traffic from our video server. It floods this multicast
traffic to R2 and R3 but these two routers don’t have any interest in the multicast traffic. They
will send a prune message to signal R1 that it should no longer forward the multicast traffic.
Sparse Mode
At this moment you might be thinking that dense mode is very inefficient with its flooding of
multicast traffic. When you only have a few receivers on your network then yes, you will be
wasting a lot of bandwidth and resources on your routers.
The alternative is sparse mode which is far more efficient. Sparse mode multicast routing
protocols only forward the multicast traffic when another router requests it. It’s the complete
opposite of dense mode:
• Dense mode floods multicast traffic until a router asks you to stop.
• Sparse mode sends multicast traffic only when a router requests it.
Requesting multicast traffic sounds great but it introduces one problem…where are you going to
send your request to? With dense mode, you will receive the traffic whether you like it or not.
With sparse mode…you have no idea where the multicast traffic should come from.
To fix this issue, sparse mode uses a special router called the RP (Rendezvous Point).
All multicast traffic is forwarded to the RP and when other routers want to receive it, they’ll
have to find their way towards the RP.
Above we see R1 which is the RP for our network. It’s receiving multicast traffic from the video
server but at the moment nobody is interested in it. R1 will not send any multicast traffic on the
network at this moment.
Here’s when a router will request multicast traffic from another router:
• When the router has received an IGMP join message from a host that is directly
connected.
GNS3 LAB:
First, we have to enable multicast routing and PIM on the interface otherwise the router won’t
process IGMP traffic.
NTPCORE01(config)#
ip multicast-routing
interface Ethernet 0/2
no switchport
ip address 192.168.1.98 255.255.255.0
ip pim dense-mode
ip igmp version 2
no shutdown
Above we can see that IGMP is enabled and that our router is the querying router. There are no other routers so that's an easy way to win the election.
Before we let the hosts join a multicast group, let’s enable debugging on all devices:
NTPCORE01#
*Aug 31 16:50:31.567: IGMP(0): Send v2 general Query on Ethernet0/3
*Aug 31 16:50:31.567: IGMP(0): Set report delay time to 5.6 seconds for 224.0.1.40 on
Ethernet0/3
Just like IGMP version 1, the router is now sending general membership queries every 60
seconds. This is what it looks like in wireshark:
Above you can see the destination which is 224.0.0.1 (all hosts multicast group address). Let’s
configure our first host to join a multicast group:
HOST1(config)#
interface gi0/0
no switchport
ip add 192.168.1.101 255.255.255.0
ip igmp join-group 224.0.1.40
HOST2(config)#
interface gi0/0
ip add 192.168.1.102 255.255.255.0
ip igmp join-group 224.0.1.40
HOST2(config)#
interface gi0/0
no ip igmp join-group 224.0.1.40
R#show ip mroute
R#show ip mfib
R#show ip igmp groups 224.0.1.40 detail
IGMP Version 3
IGMP version 3 adds support for “source filtering”. IGMP version 1 and version 2 allow hosts to
join multicast groups but they don’t check the source of the traffic. Any source is able to receive
traffic to the multicast group(s) that they joined.
With source filtering, we can join multicast groups but only from specified source
addresses. IGMP version 3 is a requirement for SSM (Source Specific Multicast)
Above we have a video server that is streaming multicast traffic on the network using
destination address 239.1.1.1. There are four hosts listening to this traffic, life is good. Suddenly
something happens:
An attacker didn't like the video stream and decided to stream his favorite video to destination address 239.1.1.1. Since we don't check the source address, everyone will receive the traffic from our attacker. It's also possible to send bogus traffic and create a DoS attack like this.
With IGMP version 3, our hosts can be configured to receive multicast traffic only from specified
source addresses. Let’s see how this works, I’ll use the following topology for this:
GNS3:
NTPCORE01(config)#
ip multicast-routing
interface e0/2
no switchport
ip add 192.168.1.1 255.255.255.0
ip pim dense-mode
ip igmp version 3
Our router requires multicast routing and PIM should be enabled on the interface. The default
version of IGMP is 2 so we’ll change it to version 3. Before we let H1 join a multicast group, let’s
enable debugging on both devices:
HOST1(config)#
interface GigabitEthernet 0/0
no switchport
ip address 192.168.1.101 255.255.255.0
ip igmp version 3
ip igmp join-group 224.0.1.40 source 1.1.1.1
HOST1 sends two membership report messages. The first message includes the multicast group
address and source address that we want to receive. The second message includes the “mode”.
There are two modes:
• Include: this is a list of source addresses that we accept multicast traffic from, everything
else should not be forwarded.
• Exclude: this is a list of source addresses that we refuse to accept multicast traffic from,
everything else should be forwarded.
LEAVE MECHANISM
HOST1(config-if)#
no ip igmp join-group 224.0.1.40 source 1.1.1.1
R#show ip mroute
R#show ip mfib
R#show ip igmp groups 224.0.1.40 detail
Network devices don’t really care about the type of traffic they have to forward. Your switch
receives an Ethernet frame, looks for the destination MAC address and forwards the frame
towards the destination. The same thing applies to your router, it receives an IP packet, looks
for the destination in the routing table and it forwards the packet towards the destination.
Does the frame or packet contain data from a user downloading the latest songs from Spotify or
is it important speech traffic from a VoIP phone? The switch or router doesn’t really care.
This forwarding logic is called best effort or FIFO (First In First Out). Sometimes, this can be an
issue. Here is a quick example:
Above we see a small network with two routers, two switches, two host devices and two IP
phones. We use Gigabit Ethernet everywhere except between the two routers; this is a slow serial link of, let's say, 1.544 Mbps.
When the host and IP phone transmit data and voice packets destined for the host and IP phone
on the other side, it is likely that we get congestion on the serial link. The router will queue
packets that are waiting to be transmitted but the queue is not unlimited. What should the
router do when the queue is full? drop the data packets? the voice packets? When you drop
voice packets, the user on the other side will complain about poor voice quality. When you drop
data packets, a user might complain that transfer speeds are poor.
QoS is about using tools to change how the router or switch deals with different packets. For
example, we can configure the router so that voice traffic is prioritized before data traffic.
In this lesson, I’ll give you an overview of what QoS is about, the problems we are trying to solve
and the tools we can use.
There are four characteristics of network traffic that we must deal with:
• Bandwidth
• Delay
• Jitter
• Loss
Bandwidth is the speed of the link, in bits per second (bps). With QoS, we can tell the router
how to use this bandwidth. With FIFO, packets are served on a first come first served basis. One
of the things we can do with QoS is create different queues and put certain traffic types in
different queues. We can then configure the router so that queue one gets 50% of the
bandwidth, queue two gets 20% of the bandwidth and queue three gets the remaining 30% of
the bandwidth.
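As an illustration of what such a configuration could look like on a Cisco router, here is a minimal class-based queuing sketch. The class names, DSCP values and interface are assumptions; traffic that doesn’t match any class falls into class-default:
class-map match-all BUSINESS
 match dscp af31
class-map match-all BULK
 match dscp af11
!
policy-map QUEUES
 class BUSINESS
  ! guarantee 50% of the bandwidth during congestion
  bandwidth percent 50
 class BULK
  ! guarantee 20% of the bandwidth during congestion
  bandwidth percent 20
!
interface Serial0/0
 service-policy output QUEUES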
Delay is the time it takes for a packet to get from the source to a destination, this is called
the one-way delay. The time it takes to get from a source to the destination and back is called
the round-trip delay. There are different types of delay; without going into too much detail, let
me give you a quick overview:
• Processing delay: this is the time it takes for a device to perform all tasks required to
forward the packet. For example, a router must do a lookup in the routing table, check
its ARP table, check outgoing access-lists, and more. The router model, CPU, and switching
method all affect the processing delay.
• Queuing delay: the amount of time a packet is waiting in a queue. When an interface is
congested, the packet will have to wait in the queue before it is transmitted.
• Serialization delay: the time it takes to send all bits of a frame to the physical interface
for transmission.
• Propagation delay: the time it takes for bits to cross a physical medium. For example,
the time it takes for bits to travel through a 10 mile fiber optic link is much lower than
the time it takes for bits to travel using satellite links.
Some of these delays, like the propagation delay, are things we can’t change. What we can do
with QoS however, is influence the queuing delay. For example, you could create a priority
queue that is always served before other queues. You could add voice packets to the priority
queue so they don’t have to wait long in the queue, reducing the queuing delay.
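A low latency (priority) queue for voice could look something like the sketch below; the DSCP value, percentage and interface are assumptions:
class-map match-all VOICE
 match dscp ef
!
policy-map LLQ
 class VOICE
  ! strict priority queue, limited to 20% of the bandwidth
  priority percent 20
!
interface Serial0/0
 service-policy output LLQ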
Jitter is the variation of one-way delay in a stream of packets. For example, let’s say an IP phone
sends a steady stream of voice packets. Because of congestion in the network, some packets are
delayed. The delay between packet 1 and 2 is 20 ms, the delay between packet 2 and 3 is 40 ms,
the delay between packet 3 and 4 is 5 ms, etc. The receiver of these voice packets must deal
with jitter, making sure the packets have a steady delay or you will experience poor voice
quality.
Loss is the amount of lost data, usually shown as a percentage of lost packets sent. If you send
100 packets and only 95 make it to the destination, you have 5% packet loss. Packet loss is
always possible. For example, when there is congestion, packets will be queued but once the
queue is full…packets will be dropped. With QoS, we can at least decide which packets get
dropped when this happens.
Traffic Types
With QoS, we can change our network so that certain traffic is preferred over other traffic when
it comes to bandwidth, delay, jitter and loss. What you need to configure however really
depends on the applications that you use. Let’s take a closer look at different applications and
traffic types.
Batch Application
Let’s start with a simple example, a user that wants to download a file from the Internet.
Perhaps the latest IOS image:
Let’s think about how important bandwidth, delay, jitter, and loss are when it comes to
downloading a file like this.
The file is 103.92 MB or 108967472 bytes. An IP packet is 1500 bytes by default, without the IP
and TCP header there are 1460 bytes left for the TCP segment. It would take 108967472 / 1460
= ~74635 IP packets to transfer this file to your computer.
Bandwidth is nice to have, it makes the difference between having to wait a few seconds,
minutes or a few days to download a file like this.
What about delay? There is a one-way delay to get the data from the server to your computer.
When you click on the download link, it might take a short while before the download starts.
Once the packets come in, it doesn’t really matter much what the delay is or the variation of
delay (jitter) between the packets. You are not interacting with the download, just waiting for it
to complete.
What about packet loss? File transfers like these use TCP and when some packets are lost, TCP
will retransmit your data, making sure the download makes it completely to your computer.
An application like your web browser that downloads a file is a non-interactive application,
often called a batch application or batch transfer. Bandwidth is nice to have since it reduces the
time to wait for the download to complete. Delay, jitter and loss don’t matter much. With QoS,
we can assign enough bandwidth to applications like these to ensure downloads complete in
time and keep packet loss to a minimum to prevent retransmissions.
Interactive Application
Another type of application is the interactive application. A good example is when you use
telnet or SSH to access your router or switch:
These applications don’t require a lot of bandwidth but they are somewhat sensitive to delay
and packet loss. Since you are typing commands and waiting for a response, a high delay can be
annoying to work with. If you ever had to access a router through a satellite link, you will know
what I’m talking about. Satellite links can have a one-way delay of between 500-700ms which
means that when you type a few characters, there will be a short pause before you see the
characters appear on your console.
With QoS, we can ensure that in case of congestion, interactive applications are served before
bandwidth-hungry batch applications.
Above we have a user that is speaking. With VoIP, we use a codec that processes the analog
sound into a digital signal. The analog sound is digitized for a certain time period which is usually
20 ms. With the G.711 codec, each 20 ms of audio is 160 bytes of data.
The phone will then create a new IP packet with an UDP and RTP (Realtime Transport Protocol)
header, adds the voice data to it and forwards the IP packet to the destination. The IP, UDP and
RTP header add 40 bytes of overhead so the IP packet will be 200 bytes in total.
For one second of audio, the phone will create 50 IP packets. 50 IP packets * 200 bytes = 10000
bytes per second. That’s 80 Kbps. The G.729 codec requires less bandwidth (but with reduced
audio quality) and requires only about 24 Kbps.
Bandwidth isn’t much of an issue for VoIP but delay is. If you are speaking with someone on the
phone, you expect it to be real-time. If the delay is too high, the conversation becomes a bit like
a walkie talkie conversation where you have to wait a few seconds before you get a reply. Jitter
is an issue because the codec expects a steady stream of IP packets with voice data that it
must convert back into an analog signal. Codecs can work a bit around jitter but there are
limitations.
Packet loss is also an issue, too many lost packets and your conversations will have gaps in it.
Voice traffic on a data network is possible but you will need QoS to ensure there is enough
bandwidth and to keep the delay, jitter and packet loss under control. Here are some commonly
used guidelines you can follow for voice traffic: a one-way delay of 150 ms or less, jitter of 30 ms
or less, and packet loss of 1% or less.
(Interactive) video traffic has similar requirements to voice traffic. Video traffic requires more
bandwidth than voice traffic but this really depends on the codec and the type of video you are
streaming. For example, if I record a video of my router console, 90% of the screen remains the
same. The background image remains the same, only the text changes every now and then. A
video with a lot of action, like a sports video, requires more bandwidth. Like voice traffic,
interactive video traffic is sensitive to delay, jitter and packet loss. The guidelines are similar to
those for voice: a one-way delay of 150 ms or less, jitter of 30 ms or less, and packet loss of 1%
or less, plus enough bandwidth for the video stream itself.
QoS Tools
We talked a bit about why we need QoS and different application types that have different
requirements. Now let’s talk about the actual tools we can use to implement QoS:
Classification can be done in a number of ways. One common way to do it is to use an access-list
and match on certain values in the IP packet like the source and/or destination addresses or
port numbers. For example, an access-list that matches on TCP destination port 80 is a quick
way to classify all HTTP traffic.
Once the traffic is classified, it’s best practice to mark the packet.
Marking means we change one or more of the header fields in a packet or frame. For example,
an IP packet has the ToS (Type of Service) field that we can use to mark the packet:
Ethernet frames don’t have such a field but we do have something for trunks. The tag that is
added by 802.1Q has a priority field:
Above we see a switch with two hosts and one phone. The switch receives a number of packets
from the hosts and phone and is configured to classify these packets using an access-list on its
interfaces. The switch then marks the IP packets using the ToS field in the IP header.
The reason that we use marking is that sometimes classification requires some complex access-
lists / rules and can degrade performance on the router or switch that is doing classification. In
the example above, the router receives marked packets so it doesn’t have to do complex
classification using access-lists like the switch. It will still do classification but only has to look for
marked packets.
GNS3 (CLASSIFICATION)
AGGRSW01(config)#
ip access-list extended TELNET
permit tcp any any eq 23
class-map TELNET
match access-group name TELNET
policy-map CLASSIFY
class TELNET
int e0/2
service-policy input CLASSIFY
!
line vty 0 4
transport input all
login local
SW#telnet 192.168.1.1
Trying 192.168.1.1 ... Open
User Access Verification
Username:
[Connection to 192.168.1.1 closed by foreign host]
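To check whether the class-map actually matches packets, you could look at the counters of the policy on the interface; a sketch:
AGGRSW01#show policy-map interface e0/2 input
The output lists the TELNET class together with the number of packets that matched the access-list.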
GNS3 (MARKING)
AGGRSW01(config)#
ip access-list extended TELNET
permit tcp any any eq 23
class-map TELNET
match access-group name TELNET
policy-map CLASSIFY
class TELNET
set precedence network
int e0/2
service-policy input CLASSIFY
SW#telnet 192.168.1.1
Trying 192.168.1.1 ... Open
User Access Verification
Username:
[Connection to 192.168.1.1 closed by foreign host]
That’s looking good! 10 packets have been marked with precedence 7. That’s not too bad right?
Let’s see if we can also mark some packets with a DSCP value, let’s mark some HTTP traffic:
AGGRSW01(config)#
ip access-list extended HTTP-TRAFFIC
permit tcp any any eq 80
class-map HTTP-TRAFFIC
match access-group name HTTP-TRAFFIC
policy-map MARKING
class HTTP-TRAFFIC
set dscp af12
ip http server
SW#telnet 192.168.1.1 80
Trying 192.168.1.1, 80 ... Open
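Note that the MARKING policy above is defined but you don’t see it being attached anywhere. For the set dscp af12 action to take effect, the policy has to be applied to an interface with service-policy (which would replace the CLASSIFY policy we attached earlier); a sketch:
AGGRSW01(config)#int e0/2
AGGRSW01(config-if)#service-policy input MARKING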
Congestion Management
Every network device uses queuing. When a router receives an IP packet, it will check its routing
table, decide which interface to use to send the packet and then try to send the packet. When
the interface is busy, the packet will be placed in a queue waiting for the interface to be free.
Above you can see that when the router receives a packet, it can perform one or more ingress
actions, perhaps an inbound access-list that filters packets. Once the router decides where to
forward the packet to, it might perform one or more egress actions. For example, NAT. The
packet is then placed in an output queue, waiting for the interface to be ready and then
transmitted.
In the picture above, we only have one output queue so all packets are treated on a first come
first served basis. It’s a FIFO (First In First Out) scheduler. There’s one queue and everyone
must wait in line.
Most network devices offer multiple output queues. Our picture then looks like this:
The router will use classification to decide which packets go into which queue.
Policing
Policing is often used by ISPs who must limit the bitrate of their customers.
Above we have a customer and ISP router connected using Gigabit Ethernet interfaces. These
interfaces run at 1000 Mbit. What if the customer only paid for a 200 Mbit connection? In this
case, the ISP will drop the traffic that exceeds 200 Mbit.
The dashed line is the bitrate the customer paid for. This is typically called the CIR (Committed
Information Rate). Without policing, the customer will be able to get a higher bitrate than what
they paid for. The ISP will configure inbound policing:
200 Mbit is now a hard limit which might not be completely fair to the customer. Since traffic is
bursty, over a longer period of time it’s practically impossible to reach an average bit rate of 200
Mbit. Because of this,
policing is often implemented so you are allowed to “burst” your traffic for a short while after
inactivity:
Above you can see that after a longer time of inactivity, we are allowed to exceed our CIR rate
of 200 Mbit for a short while before the policer kicks in.
Instead of dropping packets right away, policers can also be configured to re-mark your
packets to a lower priority. Your traffic won’t be dropped right away, but maybe somewhere
further down the network.
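On Cisco IOS, inbound policing like this could be configured roughly as follows; the CIR value and interface are assumptions:
policy-map POLICE-200M
 class class-default
  police cir 200000000 conform-action transmit exceed-action drop
!
interface GigabitEthernet0/1
 service-policy input POLICE-200M
To re-mark instead of drop, the exceed-action could be changed to something like set-dscp-transmit af11.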
Shaping
In the previous example, you have seen how an ISP might use policing to drop your traffic. If the
customer has a CIR rate of 200 Mbit and they exceed this rate, their traffic gets dropped.
To prevent this from happening, we can implement shaping on the customer side. The shaper
will queue messages, delaying them to a certain CIR rate.
Without shaping, the bit rate that the customer sends might look like this:
Everything above the dashed line will then be dropped by the ISP’s policer. Once we configure
the shaper, our bit rate will look like this:
Everything is queued and delayed so we don’t exceed the 200 Mbit bitrate. This prevents our
traffic from getting dropped at the ISP.
The shaper solves one issue but it might introduce another issue.
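On the customer router, the matching shaper could look something like this sketch; the rate and interface are assumptions:
policy-map SHAPE-200M
 class class-default
  shape average 200000000
!
interface GigabitEthernet0/1
 service-policy output SHAPE-200M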
Congestion Avoidance
To understand congestion avoidance, we first have to talk about TCP and its window size.
TCP uses flow control using a window size where the receiver tells the sender how many bytes
to send before expecting an acknowledgment. The higher the window size, the less overhead
and the higher the throughput will be.
TCP can use a Congestion Window (CWND) and receiver window (RWND) to control the transfer
rate and avoid network congestion. When there is no packet loss, the window size will increase,
doubling every time. Below you can see that H2 receives a single TCP segment which is
acknowledged.
H1 increases the window size and now we send two TCP segments before the ACK is returned:
The window size doubles again, and we send four TCP segments before the ACK is returned:
We keep doubling the window size until TCP segments get lost or when we hit the receiver’s
advertised window size (RWND). For each TCP segment that is lost, the TCP window size is
shrunk by half. If multiple TCP segments are lost, each time the window size is shrunk by half.
Now let’s take a closer look at queuing and I’ll explain how the TCP window size applies to
queuing. Here’s an example of an output queue:
The output queue above has four packets, there is still room for more packets. The interface is
quite busy so a bit later, more packets are queued:
Right now the queue is full so if another packet arrives, it will be dropped:
This is called tail drop. To deal with this, we can use a congestion avoidance tool like WRED. These
tools will monitor the output queue and once it’s at a certain level, it will drop TCP segments, hoping
that by reducing the window size, TCP connections will slow down so that we can reduce congestion
and prevent tail drop.
Here’s an illustration:
When the queue is empty, we don’t drop any packets. Once the queue fills and it’s between the
minimum and maximum threshold, we drop a small percentage of our packets. Once we exceed
the maximum threshold, we drop all packets.
The congestion avoidance tool can randomly drop packets, or we can configure it to give certain
packets a different treatment based on their marking.
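As a minimal sketch, WRED can be enabled directly on a router interface (it can also be applied per class inside a policy-map); the interface is an assumption:
interface Serial0/0
 ! enable Weighted Random Early Detection on the output queue
 random-detect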
IPv6
So what happened to IPv4? What went wrong? We have 32 bits which gives us 4,294,967,296 IP
addresses. Remember our Class A, B and C ranges? When the Internet started you would get a
Class A, B or C network. A Class C gives you a block of 256 IP addresses, a Class B 65,536 IP
addresses and a Class A even 16,777,216 IP addresses. Large companies like Apple, Microsoft,
IBM and such got one or more Class A networks. Did they really need > 16 million IP addresses?
Many IP addresses were just wasted.
We started using VLSM (Variable Length Subnet Mask) so we could use any subnet mask we like
and create smaller subnets; we no longer had to use the Class A, B or C networks. We also started
using NAT and PAT so we can have many private IP addresses behind a single public IP
address.
Nevertheless the Internet has grown in a way nobody expected 20 years ago. Despite all our
cool tricks like VLSM and NAT/PAT we really need more IP addresses and that’s why we need
IPv6.
What happened to IPv5? Good question…IP version 5 was used for an experimental project
called “Internet Stream Protocol”. It’s defined in a RFC if you are interested:
http://www.faqs.org/rfcs/rfc1819.html
IPv6 has 128 bit addresses and has a much larger address space than 32-bit IPv4 which offered
us a bit more than 4 billion addresses. Keep in mind every additional bit doubles the number of
IP addresses…so we go from 4 billion to 8 billion, 16,32,64, etc. Keep doubling until you reach
128 bit. With 128 bits this is the largest value you can create:
• 340,282,366,920,938,463,463,374,607,431,768,211,456
• 340- undecillion
• 282- decillion
• 366- nonillion
• 920- octillion
• 938- septillion
• 463- sextillion
• 463- quintillion
• 374- quadrillion
• 607- trillion
• 431- billion
• 768- million
• 211- thousand
• 456
That’s mind boggling… This gives us enough IP addresses for networks on earth, the moon, mars
and the rest of the universe. To put this in perspective let’s put the entire IPv6 and IPv4 address
space next to each other:
• IPv6: 340282366920938463463374607431768211456
• IPv4: 4294967296
Some other nice numbers: the entire IPv6 address space is 2^96 times (roughly 79 octillion times) the size of
the complete IPv4 address space. Or if you like percentages, the entire IPv4 address space is only about
0.00000000000000000000000000126% (1.26 × 10^-27 %) of the entire IPv6 address space.
The main reason to start using IPv6 is that we need more addresses but it also offers some new
features:
• No Broadcast traffic: that’s right, we don’t use broadcasts anymore. We use multicast
instead. This means some protocols like ARP are replaced with other solutions.
• Stateless Autoconfiguration: this is like a “mini DHCP server”. Routers running IPv6 are
able to advertise the IPv6 prefix and gateway address to hosts so that they can
automatically configure themselves and get access outside of their own network.
• Address Renumbering: renumbering static IPv4 addresses on your network is a pain. If
you use stateless autoconfiguration for IPv6 then you can easily swap the current prefix
with another one.
• Mobility: IPv6 has built-in support for mobile devices. Hosts will be able to move from
one network to another and keep their current IPv6 address.
• No NAT / PAT: we have so many IPv6 addresses that we don’t need NAT or PAT
anymore, every device in your network can have a public IPv6 address.
• IPsec: IPv6 has native support for IPsec, you don’t have to use it but it’s built into the
protocol.
• Improved header: the IPv6 header is simpler and doesn’t require checksums. It also has
a flow label that is used to quickly see if certain packets belong to the same flow or not.
• Migration Tools: IPv4 and IPv6 are not compatible so we need migration tools. There
are multiple tunneling techniques that we can use to transport IPv6 over IPv4 networks
(or the other way around). Running IPv4 and IPv6 simultaneously is called “dual stack”.
What does an IPv6 address look like? We use a different format than IPv4:
We don’t use decimal numbers like for IPv4, we are using hexadecimal now. Here’s an example
of an actual IPv6 address:
2041:1234:140F:1122:AB91:564F:875B:131B
Now imagine you have to call one of your users or colleagues and ask him or her to ping this
IPv6 address when you are trying to troubleshoot something…sounds like fun right?
2041:0000:140F:0000:0000:0000:875B:131B
To make our lives a bit better, IPv6 addresses can be shortened. Let’s take a look at some
examples and I’ll show you how it works:
• Original: 2041:0000:140F:0000:0000:0000:875B:131B
• Short: 2041:0000:140F::875B:131B
If there is a string of zeros then you can remove them once. In the example above I removed
the entire 0000:0000:0000 part. You can only do this once, your IPv6 device will fill up the
remaining space with zeros until it has a 128 bit address.
• Short: 2041:0000:140F::875B:131B
• Shorter: 2041:0:140F::875B:131B
If you have a “hextet” with 4 zeros then you can remove those and leave a single zero. Your IPv6
device will add the remaining 3 zeros.
Leading zeros can also be removed, here’s another address to demonstrate this:
• Original: 2001:0001:0002:0003:0004:0005:0006:0007
• Short: 2001:1:2:3:4:5:6:7
• An entire string of zeros can be removed, you can only do this once.
• 4 zeros can be removed, leaving only a single zero.
• Leading zeros can be removed.
2001:1111:2222:3333::/64
This is pretty much the same as using 192.168.1.1/24. The number behind the / is the number
of bits that we use for the prefix. In the example above it means that 2001:1111:2222:3333 is
the prefix (64 bits) and everything behind it can be used for hosts.
When calculating subnets for IPv4 we can use the subnet mask to determine the network
address and for IPv6 we can do something alike. For any given IPv6 address we can calculate
what the prefix is but it works a bit different.
Let me show you what I’m talking about, here’s an IPv6 address that could be assigned to a host:
2001:1234:5678:1234:5678:ABCD:EF12:1234/64
What part from this IPv6 address is the prefix and what part identifies the host?
Since we use a /64 it means that the first 64 bits are the prefix. Each hexadecimal character
represents 4 binary bits so that means that this part is the prefix:
2001:1234:5678:1234
This part has 16 hexadecimal characters. 16 x 4 means 64 bits. So that’s the prefix right there.
The rest of the IPv6 address identifies the host:
5678:ABCD:EF12:1234
So we figured out that “2001:1234:5678:1234” is the prefix part but writing it down like this is
not correct. To write down the prefix correctly we need to add zeros at the end of this prefix so
that it is a 128 bit address again and add the prefix length:
2001:1234:5678:1234::/64
That’s the shortest way to write down the prefix. Let’s look at another example:
3211::1234:ABCD:5678:1010:CAFE/64
Before we can see what the prefix is, we should write down the complete address as this one
has been shortened (see the :: ). Just add the zeros until we have a full 128 bit address again:
3211:0000:0000:1234:ABCD:5678:1010:CAFE/64
We still have a prefix length of 64 bits. A single hexadecimal character represents 4 binary bits,
so the first 16 hexadecimal characters are the prefix:
3211:0000:0000:1234
Now we can add zeros at the end to make it a 128 bit address again and add the prefix length:
3211:0000:0000:1234::/64
3211:0:0:1234::/64
4 zeroes in a row can be replaced by a single zero, so “3211:0:0:1234::/64” is the shortest we can
make this prefix.
Depending on the prefix length it makes the calculations very easy or (very) difficult. In the
examples I just showed you both prefixes had a length of 64. What if I had a prefix length of /53
or something?
Each hexadecimal character represents 4 binary bits. When your prefix length is a multiple of 16
then it’s easy to calculate because 16 binary bits represent 4 hexadecimal characters.
So with a prefix length of 64 we have 4 “blocks” with 4 hexadecimal characters each which
makes it easy to calculate. When the prefix length is a multiple of 4 then it’s still not too bad
because the boundary will be a single hexadecimal character.
When the prefix length is not a multiple of 16 or 4 it means we have to do some binary
calculations. Let me give you an example!
2001:1234:abcd:5678:9877:3322:5541:aabb/53
This is our IPv6 address and I would like to know the prefix for this address. Where do I start?
The first 48 bits (2001:1234:abcd) are a multiple of 16, so the 53rd bit falls somewhere in the
fourth hextet (5678). To know what the prefix is we will have to calculate that hextet to binary:
5678 = 0101 0110 0111 1000
The first 5 bits of this hextet (bits 49 to 53 of the address) belong to the prefix; this is where the
boundary is between “prefix” and “host”. Now we set the host bits to 0 so that only the prefix
remains and calculate from binary back to hexadecimal:
0101 0000 0000 0000 = 5000
Put this hextet back into place and set all the other host bits to 0 as well.
We have now found our prefix! 2001:1234:abcd:5000::/53 is the answer. It’s not that bad to
calculate but you do have to get your hands dirty with binary…
We still have multicast, same idea but we use different addresses. There are also some reserved
addresses that are similar to their IPv4 counterparts.
Something new is anycast, an address that can be assigned on multiple devices so that packets
are always routed to the closest destination. Also, broadcast traffic doesn’t exist in IPv6
anymore.
In this lesson we’ll take a look at all the different address types and I’ll explain what they look
like and how we use them.
Unicast
Unicast IPv6 addresses are similar to unicast IPv4 addresses. These are meant to be configured
on one interface so that you can send and receive IPv6 packets. There are a number of different
unicast address types that we’ll discuss here.
Global Unicast
The global unicast IPv6 addresses are similar to IPv4 public addresses. These addresses can be
used on the Internet. The big difference with IPv4 however, is that IPv6 has so much address
space that we can use global unicast addresses on any device in the network.
Unique Local
Unique local addresses work like the IPv4 private addresses. You can use these addresses on
your own network if you don’t intend to connect to the Internet or if you plan to use IPv6 NAT.
The advantage of unique local addresses is that you don’t need to register at an authority to get
some address space. The FC00::/7 prefix is reserved for unique local addresses, however when
you implement this you have to set the L-bit to 1 which means that the first two digits will be
FD. Here’s an example:
Let’s discuss all the fields of the unique local address. The first 7 bits indicate that we have a
unique local address. 1111 110 in binary is FC in hexadecimal. However, the L bit (8th bit) has to
be set to 1 so we end up with 1111 1101 which is FD in hexadecimal.
The global ID (40 bits) is something you can make up. Normally an ISP would choose a prefix but
now it’s up to you to think of something. What’s left is 16 bits that we can use for different
subnets. This gives us a 64-bit prefix, what’s left is 64 bits for the interface ID.
Let’s work on an example…let’s say that we have a LAN and we want to use unique local IPv6
addresses and we require 10 subnets:
FDAB:1234:5678:0000::/64 will be our first subnet. The other subnets could look like this:
• FDAB:1234:5678:0000::/64
• FDAB:1234:5678:0001::/64
• FDAB:1234:5678:0002::/64
• FDAB:1234:5678:0003::/64
• FDAB:1234:5678:0004::/64
• FDAB:1234:5678:0005::/64
• And so on…
If you are just messing around with IPv6 then you could use a simple global ID like 00:0000:0000,
which is nice because all those zeros can be compressed. For production networks, it’s better to pick
something that is truly unique. When you want to connect multiple sites that use unique local
addresses then you want to make sure you don’t have overlapping global IDs.
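Configuring one of these subnets on a router interface works like any other IPv6 address; a sketch, using the first subnet from the list above and an assumed interface:
interface GigabitEthernet0/1
 ipv6 address FDAB:1234:5678::1/64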
Link-Local
Link-local addresses are something new in IPv6. As the wording implies, these addresses only
work on the local link, we never route these addresses. These addresses are used to send and
receive IPv6 packets on a single subnet.
When you enable IPv6 on an interface then the device will automatically create a link-local
address. We use the link-local address for things like neighbor discovery (the replacement for
ARP) and as the next hop address for routes in your routing table. You will learn more about this
when you work through the static route and OSPFv3 lessons.
We use the FE80::/10 range for link-local addresses, this means that the first 10 bits are 1111
1110 10. Here’s what it looks like:
The first 10 bits are always 1111 1110 10 which means that we start with FE80. Technically
anything from FE80:: up to FEBF:: falls within this range and is a valid link-local address.
Link-local addresses however are automatically generated by the host, which sets the 54 bits
after the FE80::/10 part to zero. This means that normally you will only see link-local addresses
that start with FE80.
Site-Local
The site local range was originally meant to be the “private range” for IPv6. It has been
deprecated though and nowadays we use the unique local addresses instead. For these
addresses we used the FEC0::/10 range (1111 1110 11 in binary)
If you are interested why they gave up on the site local addresses then you can read RFC
3879 for the full story.
Unspecified
The 0:0:0:0:0:0:0:0 address is called the unspecified address, :: is the shortened version of this
address. It should never be configured on a host and is used to indicate that the host doesn’t
have any address.
Loopback
The 0:0:0:0:0:0:0:1 address is called the loopback address, the short version is ::1. IPv6 devices
can use this to send an IPv6 packet to themselves which is typically used for testing. It should
never be assigned to any physical interfaces. This address is the equivalent of IPv4’s 127.0.0.1
address.
Multicast
In IPv6 we use multicast for IPv6 (routing) protocols and for user traffic. We use the FF00::/8 prefix
for multicast traffic (the first 8 bits are 1111 1111 in binary). Let’s take a look at what the addresses look like:
The first 8 bits indicates that we have a multicast address. The next 4 bits are used to set flags,
these are used for some special things like embedded RP. The scope bits are used to tell the
“scope” of this multicast traffic. You can use this to indicate that the multicast traffic should be
restricted to link-local, organization local or global (Internet).
Below you will find an overview with some of the most common IPv6 multicast addresses:
If you look closely you can see some of these addresses are similar to their IPv4 multicast
counterparts. For example, in IPv4 we use 224.0.0.5 and 224.0.0.6 for OSPF while we use
FF02::5 and FF02::6 for IPv6. We use 224.0.0.9 for RIPv2 and FF02::9 for RIPng.
Anycast
The anycast address is new in IPv6. The same address can be assigned to multiple devices and
advertised in a routing protocol. When you send a packet to an anycast address then it will be
delivered to the closest interface. Something similar is possible in IPv4 but it was never
“officially” possible. There is no specific prefix for anycast addresses. Any unicast address that
you use on more than one device is suddenly an anycast address. The only difference is that you
have to configure the device and tell it that the address will be used for anycast.
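On Cisco IOS you mark an address as anycast with the anycast keyword; a sketch with an example address:
interface Loopback0
 ipv6 address 2001:DB8:0:10::1/128 anycast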
IANA “owns” the entire IPv6 address space and they assign certain prefixes to the RIRs (Regional
Internet Registry). There are 5 RIRs at the moment:
• AFRINIC: Africa
• APNIC: Asia/Pacific
• ARIN: North America
• LACNIC: Latin America and some Caribbean Islands
• RIPE NCC: Europe, Middle East and Central Asia
If you are interested, IANA publishes an overview of all IPv6 prefix assignments on its website.
When a large ISP (or large company) in North America wants IPv6 addresses then they will
contact ARIN who will assign them an IPv6 prefix if they meet all requirements. The ISP can then
assign prefixes to their customers.
• IANA is using the 2000::/3 prefix for global unicast address space.
• According to this list, RIPE NCC received prefix 2001:4000::/23 from IANA.
• A large ISP called Ziggo in The Netherlands receives prefix 2001:41f0::/32 from RIPE NCC.
• The ISP assigns prefix 2001:41f0:4060::/48 to one of their customers.
Now it’s up to the customer what they want to do with their IPv6 prefix…
Our customer received prefix 2001:41f0:4060::/48 and they want to use it to configure IPv6 on
their entire network. Where do we start? Take a look at the image below:
The 48-bit prefix that we received is typically called the global routing prefix or site prefix. The
interface ID is normally 64 bit which means we have 16 bits left to create subnets.
If I want I can steal some more bits from the Interface ID to create even more subnets but
there’s no need for this. Using 16 bits we can create 65,536 subnets…more than enough for
most of us. Let’s see what we can do for our customer:
16 bits gives us 4 hexadecimal characters. All possible combinations that we can create with
those 4 hexadecimal characters are our possible subnets. Everything from 0000 to FFFF are valid
subnets:
• 2001:41f0:4060:0000::/64
• 2001:41f0:4060:0001::/64
• 2001:41f0:4060:0002::/64
• 2001:41f0:4060:0003::/64
• 2001:41f0:4060:0004::/64
• 2001:41f0:4060:0005::/64
• 2001:41f0:4060:0006::/64
• 2001:41f0:4060:0007::/64
• 2001:41f0:4060:0008::/64
• 2001:41f0:4060:0009::/64
• 2001:41f0:4060:000A::/64
• 2001:41f0:4060:000B::/64
• 2001:41f0:4060:000C::/64
• 2001:41f0:4060:000D::/64
• 2001:41f0:4060:000E::/64
• 2001:41f0:4060:000F::/64
• 2001:41f0:4060:0010::/64
• 2001:41f0:4060:0011::/64
• 2001:41f0:4060:0012::/64
• 2001:41f0:4060:0013::/64
• 2001:41f0:4060:0014::/64
• And so on…
Now you know what subnets you can use, here’s an example of a small network where we use
some of these subnets:
In the example above I used some numbers that make sense, for example on VLAN 10 we use
2001:41f0:4060:10::/64, another good option would be 2001:41f0:4060:A::/64 since the A in
hexadecimal equals 10 in decimal. For the VLANs it’s best to use a /64 so that you can
use autoconfiguration for hosts.
Each subnet will require an IPv6 address on the router that will be used as the default gateway.
The simplest solution is probably to use the first IPv6 address in the subnet. For example, for
VLAN 20 you could use 2001:41f0:4060:20::1/64 or for VLAN 2 you could use
2001:41f0:4060:2::1/64.
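On the router (or layer 3 switch) those gateway addresses would be configured per VLAN interface; a sketch for VLAN 10 and VLAN 20, assuming SVIs are used:
ipv6 unicast-routing
!
interface Vlan10
 ipv6 address 2001:41f0:4060:10::1/64
interface Vlan20
 ipv6 address 2001:41f0:4060:20::1/64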
CONCLUSION:
• Both protocols use the concepts of a DHCP client, DHCP relay and DHCP server
• Both protocols use the concepts of scopes and leases
• Both protocols use a 4-message stateful exchange between client and server (DHCP for IPv4
uses Discover/Offer/Request/Acknowledge (DORA), and DHCPv6 uses
Solicit/Advertise/Request/Reply (SARR))
• Both protocols provide DHCP options to the end-node to provide additional information (but
DHCPv6 has a larger 16-bit option type code length)
• Both protocols support Rapid Commit functionality
Following is a list of the differences between DHCP for IPv4 and DHCPv6.
• DHCPv6 uses DHCP Unique Identifiers (DUIDs) (RFC 6355) whereas DHCP for IPv4 uses MAC
addresses to identify the client.
• Their message type names are different, but they perform many of the same functions (DHCP
message types, DHCPv6 message types).
• Obviously, DHCP for IPv4 messages are transmitted over IPv4 packets and DHCPv6 is
transmitted over IPv6 packets.
• DHCPv6 uses ICMPv6 Router Advertisement (RA) and IPv6 multicast messages and DHCP
uses broadcast IPv4 messages on the LAN.
• DHCPv6 uses link-local IPv6 addresses when communicating between client and relay/server
(RFC 6939), and DHCP for IPv4 uses unsolicited broadcasts.
• DHCP for IPv4 and DHCPv6 UDP port numbers are different. DHCP servers and relay agents
listen on UDP port 67 and clients listen on UDP port 68, DHCPv6 clients listen on UDP port
546, DHCPv6 servers and relay agents listen on UDP port 547.
• DHCPv6 servers offer randomized interface identifiers (helps limit attacker reconnaissance),
DHCP offers the next IPv4 address from the scope/pool.
• DHCPv4 can be configured on a router, stateful DHCPv6 is not typically available on routers.
• DHCP for IPv4 can provide the default gateway IP address to the client, whereas DHCPv6
does not have this option; the IPv6 node learns about its first-hop router from the ICMPv6
RA message (Pending draft on this subject).
• DHCP for IPv4 scopes are susceptible to exhaustion; DHCPv6 scopes are typically /64s with
over 18 quintillion addresses so pool exhaustion is practically impossible.
Just to clarify, in this article, we are discussing stateful DHCPv6. We are not referring to what is
called “stateless DHCPv6” (RFC 3736) or “DHCPv6-Lite”. Stateless DHCPv6 is where the IPv6 client
uses Stateless Address Auto-Configuration (SLAAC) for its IPv6 address, but acquires DNS
information from the first-hop router to help the node obtain functional use of the network.
First “hop” might make you think about the first router but that’s not the case. These are all
switch features, in particular, the switch that sits between your end devices and the first router.
RA Guard (Router Advertisements): any device on the network can transmit router advertisements
and hosts don’t care where it comes from. They will happily accept anything. With RA guard, you can
filter router advertisements. You can create a simple policy where you only accept RAs on certain
interfaces or you can inspect RAs and permit them only when they match certain criteria.
DHCPv6 Guard: similar to DHCP snooping for IPv4. We inspect DHCP packets and only permit them
from trusted interfaces. You can also create policies where you only accept DHCP packets for certain
prefixes or preference levels.
Source Guard: the switch filters all packets where the source address is not found in the IPv6 binding
table. This helps against spoofing attacks where a host uses a source address that doesn’t belong to it.
The IPv6 RA guard feature can filter router advertisements and runs on switches. This can be as
simple as “don’t allow RAs on this interface” or complex with policies where router
advertisements are only permitted when it matches certain criteria.
• If you don’t use a policy then the default device role is “host” which means that all RAs
are blocked.
• Policies allow you to permit RAs but only when they match certain criteria, for example,
a specific prefix.
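A minimal RA guard configuration on a Catalyst switch might look like the sketch below; the policy name and interface are assumptions. The access port gets the host role, so every RA received on it is dropped:
ipv6 nd raguard policy HOSTS
 device-role host
!
interface GigabitEthernet0/5
 ipv6 nd raguard attach-policy HOSTS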
This feature inspects DHCPv6 messages between a DHCPv6 server and DHCPv6 client (or relay agent)
and blocks DHCPv6 reply and advertisements from (rogue) DHCPv6 servers. DHCPv6 messages from
clients or relay agents to a DHCPv6 server are not affected.
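A sketch of a DHCPv6 guard setup where only the port towards the legitimate DHCPv6 server is allowed to send server messages; the policy name and interface are assumptions:
ipv6 dhcp guard policy DHCP-SERVER
 device-role server
!
interface GigabitEthernet0/1
 ipv6 dhcp guard attach-policy DHCP-SERVER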
DHCP SNOOPING:
When DHCP servers are allocating IP addresses to the clients on the LAN, DHCP snooping can be
configured on LAN switches to prevent malicious or malformed DHCP traffic, or rogue DHCP
servers.
IPv6 ND Inspection
IPv6 ND Inspection is one of the IPv6 first-hop security features. It creates a binding table that is
based on NS (Neighbor Solicitation) and NA (Neighbor Advertisement) messages. The switch
then uses this table to check any future NS/NA messages. When the IPv6 address and link-layer
address (LLA) combination does not match, it drops the message. This only applies to NS/NA messages, it doesn’t drop any
actual data packets that have a spoofed IPv6 or MAC address.
1. We split the 48-bit MAC address into two pieces of 24 bits.
2. We insert “FFFE” in between the two pieces so that we have a 64 bit value.
3. We invert the 7th bit of the interface ID.
So if my MAC address would be 1234.5678.ABCD then this is what the interface ID will become:
Above you see how we split the MAC address and put FFFE in the middle. It doesn’t include the
final step which is “inverting the 7th” bit. To do this you have to convert the first two
hexadecimal characters of the first byte to binary, lookup the 7th bit and invert it. This means
that if it’s a 0 you need to make it a 1, and if it’s a 1 it has to become a 0.
The 7th bit represents the universal unique bit. A “built in” MAC address will always have this bit
set to 0. When you change the MAC address this bit has to be set to 1. Normally people don’t
change the MAC addresses of their interfaces which means that EUI-64 will change the 7th bit
from 0 to 1 most of the time. Here’s what it looks like:
We take the first two hexadecimal characters of the first byte, which are “12” (0001 0010 in binary).
Then we invert the 7th bit from 1 to 0 (0001 0000) and make it hexadecimal again, which gives us
“10”. The EUI-64 interface ID will then look like this: 1034:56FF:FE78:ABCD.
Now you know how EUI-64 works, let’s see what it looks like on a router. I’ll use a Cisco IOS
router for this and use 2001:1234:5678:abcd::/64 as the prefix:
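The configuration itself isn’t shown here; it would be something like this, with the interface name as an assumption:
R1(config)#interface FastEthernet0/0
R1(config-if)#ipv6 address 2001:1234:5678:abcd::/64 eui-64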
In this I configured the router with the IPv6 prefix and I used EUI-64 at the end. This is how we
can automatically generate the interface ID using the mac address. Now take a look at the IPv6
address that it created:
See the C000:18FF:FE5C:0 part above? That’s the MAC address that is split in 2, FFFE in the
middle and the “2” in “C200” of the MAC address has been inverted which is why it now shows
up as “C000”.
When you use EUI-64 on an interface that doesn’t have a MAC address then the router will
select the MAC address of the lowest numbered interface on the router.
I hope this has been useful to understand EUI-64
In this lesson, I’ll explain how to create IPv6 summaries and we’ll walk through some examples
together.
Example 1
• 2001:DB8:1234:ABA2::/64
• 2001:DB8:1234:ABC3::/64
Let’s say we have to create a summary that includes the two prefixes above. Each hextet
represents 16 bits. The first three hextets are the same (2001:DB8:1234) so we have 16 + 16 +
16 = 48 bits that are the same so far. To find the other bits that are the same we only have to
focus on the last hextet:
• ABA2
• ABC3
We’ll have to convert these from hexadecimal to binary to see how many bits are the same:
ABA2 1010101110100010
ABC3 1010101111000011
The first 9 bits are the same; the remaining bits are different. To get our summary address, we
have to zero out the differing bits:
AB80 1010101110000000
When we calculate this from binary back to hexadecimal we get AB80. The first three hextets
are the same and in the 4th octet we have 9 bits that are the same. 48 + 9 = 57 bits. Our
summary address will be:
2001:DB8:1234:AB80::/57
Example 2
• 2001:DB8:0:1::/64
• 2001:DB8:0:2::/64
• 2001:DB8:0:3::/64
And our goal is to create the most optimal summary address. The first three hextets are the
same so that’s 16 + 16 + 16 = 48 bits that these prefixes have in common. For the remaining bits,
we’ll have to look at the 4th hextet in binary:
0001 0000000000000001
0002 0000000000000010
0003 0000000000000011
Keep in mind that each hextet represents 16 bits. The first 14 bits are the same; to get the
summary address we have to zero out the remaining bits:
0000 0000000000000000
When we calculate this from binary back to hexadecimal we get 0000. The first three hextets are
the same and in the 4th octet we have 14 bits that are the same. 48 + 14 = 62 bits. Our summary
address will be:
2001:DB8::/62
Example 3
• 2001:DB8:0:7::/64
• 2001:DB8:0:12::/64
Let’s see what the most optimal summary address is that has these two prefixes. The first three
hextets are the same so that’s 16 + 16 + 16 = 48 bits in common. Let’s look at the 4th hextet for
the remaining bits:
0007 0000000000000111
0012 0000000000010010
Be careful that you don’t accidentally convert the number 12 from decimal to binary. We are working
with hexadecimal values here! We have 11 bits that are the same, let’s zero out the remaining 5
bits:
0000 0000000000000000
We have 48 + 11 bits that are the same so our summary address will be:
2001:DB8::/59
All solicited node multicast group addresses start with FF02::1:FF /104:
Let’s take a look on a Cisco IOS router to see what these solicited node multicast group
addresses look like:
R1(config-if)#ipv6 enable
I just enabled IPv6 on an interface, this causes the router to create a link-local IPv6 address. It
will also compute and join the solicited node multicast group address:
FF02::1
FF02::1:FF8B:36D0
Above you can see that the router joined FF02::1:FF8B:36D0. The last 6 hexadecimal characters
were copied from the link local address. Here’s a picture:
Above you can see the complete uncompressed solicited node multicast address.
I can configure multiple IPv6 addresses on the interface; if the last 6 hexadecimal characters are
the same then there is no need to join another multicast address. For example, let’s configure an
IPv6 unicast address:
I’ll use EUI-64 to generate the last 64 bits. Take a look at the joined group addresses:
FF02::1
FF02::1:FF8B:36D0
The last 64 bits of the link local and unicast address are the same so the solicited node multicast
group address remains the same. If we configure an IPv6 address where the last 6 hexadecimal
characters are different then the router will join another multicast group. Let’s try that:
Instead of using EUI-64 I’ll make up an address myself. The router will now join an additional
multicast group:
FF02::1
FF02::1:FF34:5678
FF02::1:FF8B:36D0
Above you can see the router also joined the FF02::1:FF34:5678 solicited node multicast group
address.
You have now seen that an IPv6 device computes and joins a solicited node multicast group
address for each IPv6 address that you configure.
ND uses ICMP and solicited node multicast addresses to discover the layer 2 address of other
IPv6 hosts on the same network (local link). It uses two messages to accomplish this: the neighbor
solicitation (NS) and the neighbor advertisement (NA).
Using solicited node multicast addresses as the destination is far more efficient than IPv4’s ARP
requests that are broadcasted to all hosts.
Every IPv6 device will compute a solicited node multicast address by taking the multicast group
address (FF02::1:FF /104) and adding the last 6 hexadecimal characters from its IPv6 address. It
will then join this multicast group address and “listens” to it.
When one host wants to find the layer two address of another host, it will send the neighbor
solicitation to the remote host’s solicited node multicast address. It can calculate the solicited
node multicast address of the remote host since it knows about the multicast group address and
it knows the IPv6 address that it wants to reach.
The result will be that only the remote host will receive the neighbor solicitation. That’s far
more efficient than a broadcast that is received by everyone…
Once R1 receives the neighbor advertisement, these two IPv6 hosts will be able to communicate
with each other.
GNS3 LAB:
Now you have an idea how IPv6 neighbor discovery works. Let’s see what it looks like on some
real devices. I’ll also show you some wireshark captures. I will use these two routers for this
demonstration:
R1 & R2
(config-if)#ipv6 enable
Using ipv6 enable is enough to generate some link local addresses which is all we need for this
exercise. Here are the IPv6 addresses that the routers created:
To see the neighbor discovery in action I will enable a debug on both routers:
R1 & R2
#debug ipv6 nd
R1#ping FE80::C002:3FF:FEE4:0
!!!!!
R1#
First we see a line that includes INCMP, this indicates that the address resolution is in progress.
Next we see that R1 is sending the NS (neighbor solicitation) and receiving the NA (neighbor
advertisement). In the neighbor advertisement it finds the layer two address of R2
(c202.03e4.0000). The status jumps from INCMP to REACH since R1 now knows how to reach
R2. You can also see that R1 receives a neighbor solicitation from R2 and replies with the
neighbor advertisement. Here’s what it looks like on R2:
R2#
These debugs are interesting but they don’t show us the source and destination address that are
in use.
GNS3 LAB:
I’m going to use two routers to show you how stateless autoconfiguration works. R2 will have an
IPv6 address and is going to send router advertisements. R1 will use this to configure its own
IPv6 address.
R2(config)#ipv6 unicast-routing
Besides configuring an IPv6 address we have to use the ipv6 unicast-routing command to make
R2 act like a router. Remember this command since you need it for routing protocols as well.
We need to enable ipv6 address autoconfig on R1 to make sure it generates its own IPv6
address.
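The interface configurations aren’t shown here; a sketch of what they might look like, with the interface names assumed and R2’s prefix taken from the output further down:
R2(config)#interface FastEthernet0/0
R2(config-if)#ipv6 address 2001:1234::1/64
R1(config)#interface FastEthernet0/0
R1(config-if)#ipv6 address autoconfig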
R1#debug ipv6 nd
R2#debug ipv6 nd
R2#
Here you can see R2 sending the router advertisement with the prefix.
R1#
This is R1 receiving the router advertisement and configuring its own IPv6 address.
FastEthernet0/0 [up/up]
FE80::CE09:18FF:FE0E:0
2001:1234::CE09:18FF:FE0E:0
And here is the proof that we have a fresh new IPv6 address on R1.
HomeAgentFlag=0, Preference=Medium
You can also use the show ipv6 routers command to see all cached router advertisements. This
is a good example where you will see the link-local address of R2 instead of the global unicast
address.
Not bad right? If we can do this why do we still care about DHCPv6? Don’t forget DHCP can do
many more things than just giving out IPv6 addresses, like handing out DNS servers, a domain
name and other options.
DHCP is of course also available for IPv6 and is called DHCPv6. The big difference between DHCP
for IPv4 and DHCPv6 is that we don’t use broadcast traffic anymore. When an IPv6 device is
looking for a DHCPv6 server it will send multicast packets to FF02::1:2. Routers will forward
these packets to DHCP servers.
In this tutorial we’ll take a look at DHCPv6 so we can automatically assign IPv6 addresses to our
hosts. The functionality of DHCPv6 is the same as DHCP for IPv4 but there are some differences.
First of all, DHCPv6 supports two different methods:
• Stateful configuration
• Stateless configuration (used together with SLAAC…StateLess Address AutoConfiguration)
The stateful version of DHCPv6 is pretty much the same as for IPv4. Our DHCPv6 server will
assign IPv6 addresses to all DHCPv6 clients and it will keep track of the bindings. In short, the
DHCPv6 server knows exactly what IPv6 address has been assigned to what host.
Stateless works a bit different…the DHCPv6 server does not assign IPv6 addresses to the DHCPv6
clients, this is done through autoconfiguration. The DHCPv6 server is only used to assign
information that autoconfiguration doesn’t….stuff like a domain-name, multiple DNS servers
and all the other options that DHCP has to offer.
By default it uses normal mode, if you want the rapid mode you have to enable it on both the
DHCPv6 server and client.
You might be wondering why there is a normal and rapid mode, so did I…RFC 4039 says that the
rapid mode is useful in “high mobility” networks where clients come and go often. The overhead
of 4 messages might not be required so 2 messages is enough to do the job. If you have multiple
DHCPv6 servers (for redundancy) then you need to use the normal mode (4 messages). Seeing
the advantage of both modes might be fun for a tutorial in the future, for now…let’s start with
the basics and configure our DHCPv6 server!
GNS3 LAB
Our DHCPv6 router has two interfaces, the one connected to R1 will be used for stateful DHCPv6
and the interface connected to R2 will be used for stateless. You can also see the prefixes that I
will use.
Before you can do anything with IPv6, make sure that unicast routing is enabled:
DHCPV6(config)#ipv6 unicast-routing
DHCPV6(config)#ipv6 dhcp pool STATEFUL
DHCPV6(config-dhcpv6)#address prefix 2001:1111:1111:1111::/64
DHCPV6(config-dhcpv6)#dns-server 2001:4860:4860::8888
DHCPV6(config-dhcpv6)#domain-name NETWORKJOURNEY.LOCAL
The pool is called “STATEFUL” and besides the prefix I configured a DNS server (that’s google
DNS) and a domain name. To activate this, we have to make some changes to the interface:
DHCPV6(config-if)#ipv6 nd managed-config-flag
On the interface you have to add the ipv6 dhcp server command and tell it what pool it has to
use. The ipv6 nd managed-config-flag sets a flag in the router advertisement that tells the hosts
that they could use DHCPv6. The last command that ends with no-autoconfig tells the hosts not
to use stateless configuration.
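The two commands the paragraph above refers to aren’t shown; they would look roughly like this, with the interface name and the prefix lifetimes as assumptions:
DHCPV6(config-if)#ipv6 dhcp server STATEFUL
DHCPV6(config-if)#ipv6 nd prefix 2001:1111:1111:1111::/64 2592000 604800 no-autoconfig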
That’s all we have to do on the DHCPv6 server, let’s move on to the stateless configuration.
DHCPV6(config-dhcpv6)#dns-server 2001:4860:4860::8888
DHCPV6(config-dhcpv6)#domain-name NETWORKJOURNEY.LOCAL
As you can see I didn’t configure a prefix…I don’t have to since autoconfiguration will be used by
the client to fetch the prefix. Let’s enable it on the interface:
DHCPV6(config-if)#ipv6 nd other-config-flag
We use the same command to activate the pool on the interface but there is one extra item.
The ipv6 nd other-config-flag is required as it will inform clients through RA (Router
Advertisement) messages that they have to use DHCPv6 to receive extra information like the
domain name and DNS server after they used autoconfiguration.
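Two commands aren’t shown above: the pool definition itself and the command that activates it on the interface. Assuming the stateless pool was simply named STATELESS, they would look like this:
DHCPV6(config)#ipv6 dhcp pool STATELESS
DHCPV6(config-if)#ipv6 dhcp server STATELESS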
That’s all we have to do on the server, you can view the DHCPv6 pools with show ipv6 dhcp pool if you want:
DHCPV6#show ipv6 dhcp pool
Address allocation prefix: 2001:1111:1111:1111::/64 valid 172800 preferred 86400 (0 in use, 0 conflicts)
Active clients: 0
Active clients: 0
You can see both pools, our stateful pool with the prefix and the stateless pool without. Before I
configure the clients, I will enable a debug so we can see some of the messages in realtime:
R1 will be the stateful client and R2 is the stateless client, let’s do R1 first…
R1(config-if)#ipv6 enable
R1(config-if)#ipv6 address dhcp
FastEthernet0/0 [up/up]
FE80::21D:A1FF:FE8B:36D0
2001:1111:1111:1111:255A:E159:32AF:5E42
That’s looking good, you can see that it has an IPv6 address with the 2001:1111:1111:1111::/64
prefix. There’s another nice command that shows us what else we received:
DUID: 000300010016C7BE0EC8
Preference: 0
Configuration parameters:
Address: 2001:1111:1111:1111:255A:E159:32AF:5E42/128
The show ipv6 dhcp interface command shows us what DNS and domain information we
received, this is looking good. Meanwhile you can see this on the server:
DHCPV6#
Above you can see the 4 messages (solicit, advertise, request and reply) because we are using
normal mode. Let’s switch the server and client to rapid mode so you can see the difference:
We have to change this on the interface level, same for the client:
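A sketch of those changes, on the server interface and on the client interface respectively:
DHCPV6(config-if)#ipv6 dhcp server STATEFUL rapid-commit
R1(config-if)#ipv6 address dhcp rapid-commit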
DHCPV6#
2 messages instead of 4, that’s it…you now have seen the difference between normal and rapid
mode. Let’s move on to the stateless client!
R2(config-if)#ipv6 enable
R2(config-if)#ipv6 address autoconfig
This time I have to use the ipv6 address autoconfig command since we use autoconfiguration to
get an IPv6 address. Let’s see if that worked:
FastEthernet0/0 [up/up]
FE80::217:5AFF:FEED:7AF1
2001:2222:2222:2222:217:5AFF:FEED:7AF1
Great, we received an address. This is what the debug on the server looks like:
DHCPV6#
It receives an information request which basically means that the client wants to know about
the “extra” stuff that the DHCPv6 pool has to offer. In our example that’s the DNS server and
the domain name. Let’s check if the client received those:
DUID: 000300010016C7BE0EC8
Preference: 0
Configuration parameters:
That’s good, it learned about the DNS server and the domain name. What does the pool look
like on the server?
Address allocation prefix: 2001:1111:1111:1111::/64 valid 172800 preferred 86400 (1 in use, 0 conflicts)
Active clients: 1
Active clients: 0
This is a good example as it shows you that the DHCPv6 server sees an active client for the
stateful pool but not for the stateless pool.
Just like with IPv4, it is possible to use an interface as the next hop. This will only work
with point-to-point interfaces:
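The route itself isn’t shown; it would look something like this, assuming a serial link towards R2:
R1(config)#ipv6 route 2001:DB8:2:2::/64 Serial 0/0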
S 2001:DB8:2:2::/64 [1/0]
R1#ping 2001:DB8:2:2::2
!!!!!
If you try this with a FastEthernet interface, you’ll see that the router will accept the command
but the ping won’t work. You can’t use this for multi-access interfaces.
Static route for a prefix – global unicast next hop
Instead of an outgoing interface, we can also specify the global unicast address as the next hop:
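A sketch of that route, with the next hop matching the routing table output below:
R1(config)#ipv6 route 2001:DB8:2:2::/64 2001:DB8:12:12::2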
S 2001:DB8:2:2::/64 [1/0]
via 2001:DB8:12:12::2
R1#ping 2001:DB8:2:2::2
!!!!!
No problem at all…
Instead of global unicast addresses, you can also use unique local addresses. These are the IPv6
equivalent of IPv4 private addresses.
One of the differences between IPv4 and IPv6 is that IPv6 generates a link-local address for each
interface. In fact, these link-local addresses are also used by routing protocols like RIPng, EIGRP,
OSPFv3, etc as the next hop addresses. Let’s see what the link-local address is of R2:
Let’s use this as the next hop address. When you use a global unicast address as the next hop,
your router will be able to look at the routing table and figure out what outgoing interface to
use to reach this global unicast address. With link local addresses, the router has no clue which
outgoing interface to use so you will have to specify both the outgoing interface and the link
local address:
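A sketch of such a route; FE80::2 stands in for R2’s actual link-local address and the interface name is an assumption:
R1(config)#ipv6 route 2001:DB8:2:2::/64 FastEthernet 0/0 FE80::2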
S 2001:DB8:2:2::/64 [1/0]
R1#ping 2001:DB8:2:2::2
!!!!!
No problems there.
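The same options exist for a default route (::/0). A sketch of the version with an outgoing interface, which matches the first routing table entry below; the interface name is an assumption:
R1(config)#ipv6 route ::/0 Serial 0/0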
S ::/0 [1/0]
R1#ping 2001:DB8:2:2::2
!!!!!
Instead of an outgoing interface, let’s try a global unicast next hop address:
S ::/0 [1/0]
via 2001:DB8:12:12::2
R1#ping 2001:DB8:2:2::2
!!!!!
Let’s replace the global unicast next hop address with a link-local address:
S ::/0 [1/0]
R1#ping 2001:DB8:2:2::2
!!!!!
S 2001:DB8:2:2::2/128 [1/0]
R1#ping 2001:DB8:2:2::2
!!!!!
S 2001:DB8:2:2::2/128 [1/0]
via 2001:DB8:12:12::2
R1#ping 2001:DB8:2:2::2
!!!!!
Last but not least, a link-local address as the next hop address:
S 2001:DB8:2:2::2/128 [1/0]
R1#ping 2001:DB8:2:2::2
!!!!!
Floating static route
Here’s the static route that uses R3 as the primary path:
Let’s try the outgoing interface first. The static route looks like this:
Note that at the end of the line above, I specified the administrative distance with a value of 2.
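The commands themselves were not captured here. As a sketch, the pattern is a primary route with the default administrative distance and a backup route with an administrative distance of 2; the prefix, next hop, and interface below are assumptions based on the outputs that follow:
R1(config)#ipv6 route 2001:DB8:2:2::/64 2001:DB8:13:13::3
R1(config)#ipv6 route 2001:DB8:2:2::/64 fastEthernet 0/0 2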
With both interfaces up, R1 will send all traffic to R3:
S 2001:DB8:23:23::/64 [1/0]
via 2001:DB8:13:13::3
Above you can see that the default administrative distance is 1. Let’s shut the FastEthernet 0/0
interface to test our floating static route:
R1(config-if)#shutdown
S 2001:DB8:2:2::/64 [2/0]
The entry to R2 is now installed. You can also see the administrative distance value of two in the
routing table.
Instead of the outgoing interface, we can also use a global unicast address as the next hop:
S 2001:DB8:2:2::/64 [2/0]
via 2001:DB8:12:12::2
S 2001:DB8:2:2::/64 [2/0]
EIGRP for IPv6
Note that I don’t have any global unicast IPv6 addresses on the GigabitEthernet interfaces because
the EIGRP updates will be sent using the link-local addresses.
Configuration
R1 & R2
(config)#ipv6 unicast-routing
R1 & R2
(config-if)#ipv6 enable
R1(config)#interface loopback 0
R2(config)#interface loopback 0
Enabling IPv6 on the Gigabit interfaces will generate an IPv6 link-local address. The loopback
interfaces will have a global unicast address. Let’s verify our work:
GigabitEthernet0/1 [up/up]
FE80::F816:3EFF:FE7B:61CA
Loopback0 [up/up]
FE80::F816:3EFF:FEC5:1BD7
2001::1
GigabitEthernet0/1 [up/up]
FE80::F816:3EFF:FE8F:4F66
Loopback0 [up/up]
FE80::F816:3EFF:FED1:4100
2001::2
After configuring the IPv6 addresses on the loopback interface you can see the global unicast
and the link-local IPv6 addresses.
R1(config-rtr)#router-id 1.1.1.1
R1(config-rtr)#no shutdown
R1(config-if)#ipv6 eigrp 1
R1(config)#interface loopback 0
R1(config-if)#ipv6 eigrp 1
R2(config-rtr)#router-id 2.2.2.2
R2(config-rtr)#no shutdown
R2(config-if)#ipv6 eigrp 1
R2(config)#interface loopback 0
R2(config-if)#ipv6 eigrp 1
First, you need to start EIGRP with the ipv6 router eigrp command. The number you see is the
autonomous system number and it has to match on both routers. Each EIGRP router needs
a router ID, which by default is the highest IPv4 address on the router.
If you don’t have any IPv4 addresses, you need to specify it yourself with the router-
id command. By default, the EIGRP process is in shutdown mode and you need to type no
shutdown to activate it.
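Putting the pieces together, the process configuration on R1 presumably looks like this (AS number 1 matches the interface commands above):
R1(config)#ipv6 router eigrp 1
R1(config-rtr)#router-id 1.1.1.1
R1(config-rtr)#no shutdown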
Last step is to enable it on the interfaces with the ipv6 eigrp command. Let’s verify our
configuration:
FE80::F816:3EFF:FE8F:4F66
FE80::F816:3EFF:FE7B:61CA
OE2 - OSPF ext 2, ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2
a - Application
D 2001::2/128 [90/130816]
OE2 - OSPF ext 2, ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2
a - Application
D 2001::1/128 [90/130816]
Here we go…we have an EIGRP prefix in the routing table. That’s all there is to it!
OSPFv2 vs OSPFv3
OSPFv2 and OSPFv3 are very similar. OSPFv3 still establishes neighbor adjacencies, has areas,
different network types, the same metrics, runs SPF, etc. There are however some differences.
OSPFv2 runs on top of IPv4 and since OSPFv3 runs on IPv6, some changes had to be made.
• Link-local addresses: OSPFv3 packets are sourced from link-local IPv6 addresses.
• Links, not networks: OSPFv3 uses the terminology links where we use networks in
OSPFv2.
• New LSA types: there are two new LSA types, and LSA type 1 and 2 have changed.
• Interface commands: OSPFv3 is enabled per interface with interface commands; we no
longer use the network command as we do in OSPFv2.
• OSPFv3 router ID: OSPFv3 cannot derive its router ID from an IPv6 address. If there are no
IPv4 addresses on the router, you have to configure the router ID manually. It is configured
as a 32-bit value, same as in OSPFv2.
• Multiple prefixes per interface: if you have multiple IPv6 prefixes on an interface then
OSPFv3 will advertise all of them.
• Flooding scope: OSPFv3 has a flooding scope for different LSAs.
• Multiple instances per link: You can run multiple OSPFv3 instances on a single link.
• Authentication: OSPFv3 doesn’t use plain text or MD5 authentication as OSPFv2 does.
Instead, it uses IPv6’s IPsec for authentication.
• Prefixes in LSAs: OSPFv2 shows networks in LSAs as network + subnet mask, OSPFv3
shows prefixes as prefix + prefix length.
LSA Types
OSPFv3 has two new LSAs, and some of the LSA types have been renamed. Here is an overview
of all OSPFv2 and OSPFv3 LSA types:
OSPFv2                          OSPFv3
Type 1 - Router LSA             0x2001 - Router LSA
Type 2 - Network LSA            0x2002 - Network LSA
Type 3 - Summary LSA            0x2003 - Inter-Area Prefix LSA
Type 4 - ASBR Summary LSA       0x2004 - Inter-Area Router LSA
Type 5 - AS External LSA        0x4005 - AS External LSA
Type 7 - NSSA LSA               0x2007 - NSSA LSA
-                               0x0008 - Link LSA
-                               0x2009 - Intra-Area Prefix LSA
The LSA types are still the same, except that type 3 is now called the Inter-Area Prefix LSA and type 4
is called the Inter-Area Router LSA. The last two types, the link LSA and the intra-area prefix LSA, are
new to OSPFv3.
In OSPFv2, type 1 and type 2 LSAs are used for topology and network information. A single LSA
contains information about the topology and the networks that are used.
If you make a simple change, like changing the IP address on one of your routers then the
topology itself doesn’t change. In OSPFv2, a new type 1 LSA and perhaps a type 2 LSA have to be
flooded. Other routers that receive the new LSA(s) have to recalculate the SPT even though the
topology did not change.
In OSPFv3, this changed by separating prefixes from the SPF tree. There is no prefix information
in LSA types 1 and 2; these LSAs only carry topology adjacencies, not IPv6 prefixes. Prefixes are
now advertised in type 9 LSAs, and the link-local addresses that are used as next hops are
advertised in type 8 LSAs. Type 8 LSAs are only flooded on the local link; type 9 LSAs are flooded
within the area. The designers of OSPFv3 could have included link-local addresses in type 9 LSAs,
but since these are only required on the local link, it would be a waste of resources.
By separating the SPF tree and prefixes, OSPFv3 is more efficient. When the link-local address on
an interface changes, the router only has to flood an updated link LSA and intra-area-prefix LSA.
Since there are no changes to the topology, we don’t have to flood type 1 and 2 LSA(s). Other
routers won’t have to run SPF in this case.
Flooding Scope
In the table with LSA types above, you can see that the LSA types of OSPFv3 are hexadecimal
values. The first part defines the flooding scope of the LSA:
• 0x0: the link-local scope that is used for the Link LSA, a new LSA type for OSPFv3.
• 0x2: the area scope, used for LSAs that are flooded throughout a single area. This is used
for the router, network, inter-area prefix, inter-area router, and intra-area prefix LSA
types.
• 0x4: the AS scope, used for LSAs that are flooded within the OSPFv3 routing domain,
used for external LSAs.
Headers
OSPFv2 and OSPFv3 use different headers. Here are some of the differences:
Field          OSPFv3   OSPFv2
Instance ID    Yes      No
The instance ID is a new field that can be used to run multiple OSPFv3 instances on a single link.
OSPFv3 routers will only become neighbors if the instance ID is the same, which is 0 by default.
This allows you to run OSPFv3 on a broadcast network and only form neighbor adjacencies with
specific neighbors that use the same instance ID.
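The instance ID is set as part of the interface command. As a sketch (the process ID, area, and instance value here are arbitrary), two routers on this link will only become neighbors if they both use instance 2:
R1(config-if)#ipv6 ospf 1 area 0 instance 2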
Configuration
When we use OSPF for IPv4 we are using OSPFv2. OSPF has been updated for IPv6 and is now
called OSPFv3. These are two different routing protocols and in this lesson I’ll show you how to
configure OSPFv3 so that you can route IPv6 traffic. Here’s the topology we’ll use:
Let’s start with the configuration of the interfaces and the IPv6 addresses. We don’t have to
configure any global unicast IPv6 addresses on the FastEthernet interfaces because OSPFv3 uses
link-local addresses for the neighbor adjacency and sending LSAs.
R1(config)#ipv6 unicast-routing
R1(config)#interface loopback 0
R2(config)#ipv6 unicast-routing
R2(config)#interface loopback 0
Don’t forget to enable IPv6 unicast routing; otherwise, no routing protocol will work for IPv6.
FastEthernet0/0 [up/up]
Loopback0 [up/up]
FE80::CE09:18FF:FE0E:0
2001::1
FastEthernet0/0 [up/up]
Loopback0 [up/up]
FE80::CE0A:18FF:FE0E:0
2001::2
After configuring the IPv6 addresses on the loopback interface you can see the global unicast
and the link-local IPv6 addresses. There is no link-local address on the FastEthernet interfaces,
however, so we’ll have to fix this:
R1(config-if)#ipv6 enable
R2(config-if)#ipv6 enable
FastEthernet0/0 [up/up]
FE80::CE09:18FF:FE0E:0
Loopback0 [up/up]
FE80::CE09:18FF:FE0E:0
2001::1
FastEthernet0/0 [up/up]
FE80::CE0A:18FF:FE0E:0
Loopback0 [up/up]
FE80::CE0A:18FF:FE0E:0
2001::2
R1(config-rtr)#router-id 1.1.1.1
R1(config-rtr)#exit
R1(config-if)#exit
R1(config)#interface loopback 0
R2(config-rtr)#router-id 2.2.2.2
R2(config-rtr)#exit
R2(config-if)#exit
R2(config)#interface loopback 0
Just like OSPFv2, you need to start a process and specify a process ID. For OSPFv3 we have to use
the ipv6 router ospf command. Just like EIGRP for IPv6, we need a router ID if we don’t have any
IPv4 addresses configured on the router. Finally, go to the interface and use the ipv6 ospf
area command to enable OSPFv3 and select the correct area.
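Putting that together, the full configuration on R1 presumably looks something like this; process ID 1 and the FastEthernet 0/0 interface name are assumptions:
R1(config)#ipv6 router ospf 1
R1(config-rtr)#router-id 1.1.1.1
R1(config-rtr)#exit
R1(config)#interface fastEthernet 0/0
R1(config-if)#ipv6 ospf 1 area 0
R1(config-if)#exit
R1(config)#interface loopback 0
R1(config-if)#ipv6 ospf 1 area 0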
Use show ipv6 ospf neighbor to see your neighbors. It’s funny to see a neighbor ID that looks
like an IPv4 address even though OSPFv3 is IPv6-only.
O - OSPF intra, OI - OSPF inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
O 2001::2/128 [110/10]
O - OSPF intra, OI - OSPF inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
O 2001::1/128 [110/10]
In our routing table we find the fresh OSPFv3 route. That’s it! This is a fairly simple example but
it should help you to get going with OSPFv3 for IPv6.
MP-BGP
MP-BGP (Multiprotocol BGP) extends BGP so that it can carry prefixes for multiple address families, including:
• IPv4 unicast
• IPv4 multicast
• IPv6 unicast
• IPv6 multicast
MP-BGP is also used for MPLS VPN where we use MP-BGP to exchange the VPN labels. For each
different “address” type, MP-BGP uses a different address family.
To allow these new address types, MP-BGP has some new features (such as new address families and BGP attributes) that the old BGP doesn’t have.
Since MP-BGP supports IPv4 and IPv6 we have a couple of options. MP-BGP routers can become
neighbors using IPv4 addresses and exchange IPv6 prefixes or the other way around. Let’s take a
look at some configuration examples…
GNS3 LAB
R1(config)#router bgp 1
R1(config-router)#address-family ipv4
R1(config-router-af)#exit
R1(config-router)#address-family ipv6
R1(config-router-af)#network 2001:db8::1/128
In the configuration above, we first specify the remote neighbor. The address-family command is
used to change the IPv4 or IPv6 settings. I disabled the IPv4 address family and enabled IPv6. Last
but not least, we advertised the prefix on the loopback interface. The configuration of R2 looks
similar:
R2(config)#router bgp 2
R2(config-router)#address-family ipv4
R2(config-router-af)#exit
R2(config-router)#address-family ipv6
R2(config-router-af)#network 2001:db8::2/128
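The neighbor statements were trimmed from the snippets above. For completeness, here is a sketch of what the full R1 configuration might look like; the IPv6 peering address 2001:DB8:0:12::2 is an assumption:
R1(config)#router bgp 1
R1(config-router)#neighbor 2001:DB8:0:12::2 remote-as 2
R1(config-router)#address-family ipv4
R1(config-router-af)#no neighbor 2001:DB8:0:12::2 activate
R1(config-router-af)#exit
R1(config-router)#address-family ipv6
R1(config-router-af)#neighbor 2001:DB8:0:12::2 activate
R1(config-router-af)#network 2001:db8::1/128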
R1#
l - LISP
O - OSPF Intra, OI - OSPF Inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
B 2001:DB8::2/128 [20/0]
l - LISP
O - OSPF Intra, OI - OSPF Inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
B 2001:DB8::1/128 [20/0]
The routers learned each other’s prefixes…great! This example was pretty straightforward, but
you have now learned how MP-BGP uses different address families.
In the next example, R1 and R2 become BGP neighbors using their IPv4 addresses and exchange IPv6 prefixes:
R1(config)#router bgp 1
R2(config)#router bgp 2
R1(config)#router bgp 1
R1(config-router)#address-family ipv6
R1(config-router-af)#network 2001:db8::1/128
R2(config)#router bgp 2
R2(config-router)#address-family ipv6
R2(config-router-af)#network 2001:db8::2/128
Once we enter the address-family IPv6 configuration there are two things we have to configure.
The prefix has to be advertised and we need to specify the neighbor. The prefixes on the
loopback interface should now be advertised. Let’s check it out:
* 2001:DB8::2/128 ::FFFF:192.168.12.2
0 02i
* 2001:DB8::1/128 ::FFFF:192.168.12.1
0 01i
As you can see, the routers have learned about each other’s prefixes. There’s one problem
though…we were able to exchange IPv6 prefixes, but since we only use IPv4 between R1 and R2,
there is no valid IPv6 next hop address that we can use.
To fix this, we need to use some IPv6 addresses that we can use as the next hop. We’ll have to
configure a prefix between R1 and R2 for this:
Now we have IPv6 addresses that we can use as the next hop. We are using IPv4 for the
neighbor peering, so the next hop doesn’t change automatically. We’ll have to use a route-map
for this:
R1(config)#router bgp 1
R1(config-router)#address-family ipv6
R2(config)#router bgp 2
R2(config-router)#address-family ipv6
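The route-map itself isn’t shown above. One way to do it on R1 (the route-map name and the 2001:DB8:0:12::/64 link prefix are assumptions; the IPv4 neighbor address comes from the BGP output earlier) is an inbound route-map that sets R2’s IPv6 address as the next hop; R2 would use a mirrored route-map pointing at R1:
R1(config)#route-map IPV6_NH permit 10
R1(config-route-map)#set ipv6 next-hop 2001:DB8:0:12::2
R1(config-route-map)#exit
R1(config)#router bgp 1
R1(config-router)#address-family ipv6
R1(config-router-af)#neighbor 192.168.12.2 route-map IPV6_NH in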
Both routers now change the next hop IPv6 address of incoming prefixes. Let’s reset BGP:
R1#clear ip bgp *
The next hop IPv6 addresses are now reachable, so they can be installed in the routing table. The
downside of this solution is that we had to fix the next hop ourselves; the advantage, however, is
that we have a single BGP neighbor adjacency that can be used to exchange both IPv4 and
IPv6 prefixes.
IPv6 Access-Lists
As explained in my first tutorial that introduces access-lists, we can use access-lists for
filtering (blocking packets) or selecting traffic (for VPNs, NAT, etc.).
This also applies to IPv6 access-lists, which are very similar to IPv4 access-lists. There are
two important differences, however:
• IPv4 access-lists can be standard or extended, numbered or named. IPv6 only has
named extended access-lists.
• IPv4 access-lists have an invisible implicit deny any at the bottom of every access-
list. IPv6 access-lists have three invisible statements at the bottom:
o permit icmp any any nd-na
o permit icmp any any nd-ns
o deny ipv6 any any
The two permit statements are required for neighbor discovery, which is an important
protocol in IPv6; it’s the replacement for ARP.
GNS3 LAB
I’ll use subnet 2001:DB8:0:12::/64 in between R1 and R2. To demonstrate the access-list,
I’ll create one inbound on R2 and we will try to filter some packets from R1. Let’s take a
look at the access-list:
R2(config)#ipv6 access-list ?
As you can see above the only option is the named access-list. There’s also no option for
standard or extended access-list. Let’s create that access-list:
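Based on the name used below, the command is simply:
R2(config)#ipv6 access-list R1_TRAFFIC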
I’ll call it “R1_TRAFFIC”. Here are our options when we create a statement:
R2(config-ipv6-acl)#permit ?
This is similar to IPv4 access-lists. You can pick any protocol you like. Let’s see if we can
permit telnet traffic from R1 and deny everything else:
R2(config-ipv6-acl)#permit tcp ?
After specifying the source IP, I also have to select the destination IP. Let’s do that:
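The complete statement probably looks something like this; the host addresses are assumptions based on the 2001:DB8:0:12::/64 subnet, with R1 as ::1 and R2 as ::2:
R2(config-ipv6-acl)#permit tcp host 2001:DB8:0:12::1 host 2001:DB8:0:12::2 eq telnet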
This should permit telnet traffic from R1. Let’s take a look at our access-list:
R2#show access-lists
Above you see our statement. One cosmetic difference with IPv4 access-lists is that the
sequence number is behind the statement. Let’s apply this access-list on the interface:
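As a sketch (the interface name is an assumption), applying it inbound looks like this:
R2(config)#interface fastEthernet 0/0
R2(config-if)#ipv6 traffic-filter R1_TRAFFIC in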
Instead of using the access-group command you have to use the ipv6 traffic-
filter command. Let’s see if it works:
R1#telnet 2001:db8:0:12::2
R1 is able to telnet to R2. Let’s see if we find any matches on our access-list:
R2#show access-lists
There we go, we see it matches the access-list. Anything else should be dropped…let’s
try a simple ping:
R1#ping 2001:db8:0:12::2
AAAAA
The AAAAAs that you see above indicate that the destination is administratively
unreachable, which means that an access-list is dropping our packets. For security reasons
it might be a bad idea to tell the sender that its traffic has been dropped. If you want, you
can disable these messages with the no ipv6 unreachables command on the interface.
When we send another ping, you will see this:
R1#ping 2001:db8:0:12::2
.....
R2 is no longer informing R1 that the packets have been dropped. That’s all I have for
now, have fun configuring IPv6 access-lists.
SDWAN
Traditional WAN:
BENEFITS OF SD-WAN
Application-aware routing tracks network and path characteristics of the data plane tunnels
between Cisco SD-WAN devices and uses the collected information to compute optimal paths for data
traffic. These characteristics include packet loss, latency, and jitter, and the load, cost and bandwidth of a
link. The ability to consider factors in path selection other than those used by standard routing
protocols—such as route prefixes, metrics, link-state information, and route removal on the Cisco SD-
WAN device—offers a number of advantages to an enterprise:
• In normal network operation, the path taken by application data traffic through the network can
be optimized, by directing it to WAN links that support the required levels of packet loss, latency,
and jitter defined in an application’s SLA.
• In the face of network brownouts or soft failures, performance degradation can be minimized.
The tracking of network and path conditions by application-aware routing in real time can quickly
reveal performance issues, and it automatically activates strategies that redirect data traffic to
the best available path. As the network recovers from the soft failure conditions, application-
aware routing automatically readjusts the data traffic paths.
• Network costs can be reduced because data traffic can be more efficiently load-balanced.
• Application performance can be increased without the need for WAN upgrades.
The ability to share the same infrastructure and resources is what gives a multi-tenant design its
advantage: maintenance is optimised and utilisation is automated.
• To provide the highest level of security, only authenticated and authorized routers can access
and participate in the Cisco SD-WAN overlay network. To this end, the Cisco vSmart
Controller performs automatic authentication on all the routers before they can send data traffic
over the network.
• After the routers are authenticated, data traffic flows, regardless of whether the routers are in a
private address space (behind a NAT gateway) or in a public address space.
REFERENCE:
https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/sdwan-xe-gs-
book/system-overview.html
https://www.cisco.com/c/en/us/td/docs/solutions/CVD/SDWAN/cisco-sdwan-design-guide.html