Community College LAN Design Considerations
LAN Design
The community college LAN design is a multi-campus design, where a campus consists of multiple
buildings and services at each location, as shown in Figure 3-1.
[Figure 3-1: Community College LAN Design — main campus with services block, data center, and Internet edge, interconnected with large, medium, and small buildings at each campus location, each with its own services block and data center]
Figure 3-2 shows the service fabric design model used in the community college LAN design.
[Figure 3-2: Community College Service Fabric Design Model — mobility, security, and unified communications services delivered over the network foundation]
This chapter focuses on the LAN component of the overall design. The LAN component consists of the
LAN framework and network foundation technologies that provide baseline routing and switching
guidelines. The LAN design interconnects several other components, such as endpoints, data center,
WAN, and so on, to provide a foundation on which mobility, security, and unified communications (UC)
can be integrated into the overall design.
This LAN design provides guidance on building the next-generation community college network, which
becomes a common framework along with critical network technologies to deliver the foundation for the
service fabric design. This chapter is divided into the following sections:
• LAN design principles—Provides proven design choices to build various types of LANs.
• LAN design model for the community college—Leverages the design principles of the tiered network
design to facilitate a geographically dispersed college campus network made up of various elements,
including networking role, size, capacity, and infrastructure demands.
• Considerations of a multi-tier LAN design model for community colleges—Provides guidance for the
college campus LAN network as a platform with a wide range of next-generation products and
technologies to integrate applications and solutions seamlessly.
• Designing network foundation services for LAN designs in community colleges—Provides guidance
on deploying various types of Cisco IOS technologies to build a simplified and highly available
network design to provide continuous network operation. This section also provides guidance on
designing network-differentiated services that can be used to customize the allocation of network
resources to improve user experience and application performance, and to protect the network
against unmanaged devices and applications.
[Figure: Three-tier and two-tier LAN design models — the three-tier design consists of access, distribution, and core layers; the two-tier design consists of an access layer connected to a collapsed core/distribution layer]
The key layers are access, distribution, and core. Each layer can be seen as a well-defined structured
module with specific roles and functions in the LAN network. Introducing modularity in the LAN
hierarchical design further ensures that the LAN network remains resilient and flexible to provide
critical network services as well as to allow for growth and changes that may occur in a community
college.
• Access layer
The access layer represents the network edge, where traffic enters or exits the campus network.
Traditionally, the primary function of an access layer switch is to provide network access to the user.
Access layer switches connect to distribution layer switches, which perform network foundation
technologies such as routing, quality of service (QoS), and security.
To meet network application and end-user demands, the next-generation Cisco Catalyst switching
platforms no longer simply switch packets, but now provide intelligent services to various types of
endpoints at the network edge. Building intelligence into access layer switches allows them to
operate more efficiently, optimally, and securely.
• Distribution layer
The distribution layer interfaces between the access layer and the core layer to provide many key
functions, such as the following:
– Aggregating and terminating Layer 2 broadcast domains
– Aggregating Layer 3 routing boundaries
– Providing intelligent switching, routing, and network access policy functions to access the rest
of the network
– Providing high availability through redundant distribution layer switches to the end-user and
equal cost paths to the core, as well as providing differentiated services to various classes of
service applications at the edge of network
• Core layer
The core layer is the network backbone that connects all the layers of the LAN design, providing for
connectivity between end devices, computing and data storage services located within the data
center and other areas, and services within the network. The core layer serves as the aggregator for
all the other campus blocks, and ties the campus together with the rest of the network.
Note For more information on each of these layers, see the enterprise class network framework at the
following URL: http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/campover.html.
Figure 3-4 shows a sample three-tier LAN network design for community colleges where the access,
distribution, and core are all separate layers. To build a simplified, cost-effective, and efficient physical
cable layout design, Cisco recommends building an extended-star physical network topology from a
centralized building location to all other buildings on the same campus.
[Figure 3-4: Three-tier LAN network design — extended-star topology with access and distribution layers in Building B (Library and Communication Center) and Building C (Life Science Learning Resource Center) connecting to the core in Building A (Administration and Data Center)]
[Figure: Two-tier LAN network design — per-floor access switches (Social Science and Health, Arts and Technology, History and Geography, Library and Communication Center) connecting to a collapsed distribution/core layer with WAN and PSTN links on the Administration and Data Center floor]
If using the small-scale collapsed campus core design, the college network architect must understand the
network and application demands so that this design ensures a hierarchical, modular, resilient, and
flexible LAN network.
[Figure: Community college LAN reference design — a Cisco Catalyst 6500 VSS-enabled core interconnecting the data center block (CUCM/Unity, ACS/CSA-MC, NAC Manager, WCS, VSOM/VSMS, DMM/CVP, WLC, DHCP/DNS and related services), the DMZ and Internet edge (web/e-mail, WSA, ESA, GigaPOP), the service block, and Catalyst 4500-based large, medium, and small buildings over MetroE, HDLC, PSTN, Internet, and NLR links]
Depending on the number of available academic programs in a remote campus, the student, faculty, and
staff population in remote campuses may be equal to or less than the main college campus site. Campus
network designs for the remote campus may require adjusting based on overall college campus capacity.
Using high-speed WAN technology, all the remote community college campuses interconnect to a
centralized main college campus that provides shared services to all the students, faculty, and staff,
independent of their physical location. The WAN design is discussed in greater detail in the next chapter,
but it is worth mentioning in the LAN section because some remote sites may integrate LAN and WAN
functionality into a single platform. Collapsing the LAN and WAN functionality into a single Cisco
platform can provide all the needed requirements for a particular remote site as well as provide reduced
cost to the overall design, as discussed in more detail in the following section.
Table 3-1 shows a summary of the LAN design models as they are applied in the overall community
college network design.
[Figure: Main college campus LAN design model — access, distribution, core, data center block, service block, DMZ, and Internet/NLR edge]
The main college campus typically consists of various sizes of building facilities and various education
department groups. The network scale factor in the main college campus site is higher than the remote
college campus site, and includes end users, IP-enabled endpoints, servers, and security and network
edge devices. Multiple buildings of various sizes exist in one location, as shown in Figure 3-8.
[Figure 3-8: Main college campus site — three-tier design with access, distribution, core, data center block, service block, and WAN/PSTN edge]
The three-tier LAN design model for the main college campus meets all key technical aspects to provide
a well-structured and strong network foundation. The modularity and flexibility in a three-tier LAN
design model allows easier expansion and integration in the main college network, and keeps all network
elements protected and available.
To enforce external network access policy for each end user, the three-tier model also provides external
gateway services to the students and staff for accessing the Internet as well as private education and
research networks.
Note The WAN design is a separate element in this location, because it requires a separate WAN device that
connects to the three-tier LAN model. WAN design is discussed in more detail in Chapter 4,
“Community College WAN Design Considerations.”
Similar to the main college campus, Cisco recommends the three-tier LAN design model for the remote
large college campus, as shown in Figure 3-9.
[Figure 3-9: Remote large campus site — three-tier design with medium and small buildings, access, distribution, core, data center block, service block, and WAN/PSTN edge; related figures show the remote medium campus two-tier collapsed distribution/core design and the platform choices (Cisco Catalyst 6500 and Cisco Catalyst 4500) for each site]
Each design model offers consistent network services, high availability, expansion flexibility, and
network scalability. The following sections provide detailed design and deployment guidance for each
model as well as where they fit within the various locations of the community college design.
[Figure: VSS — two physical chassis joined by a Virtual Switch Link (VSL) operating as one logical system]
To provide end-to-end network access, the core layer interconnects several other network systems that
are implemented in different roles and service blocks. Using VSS to virtualize the core layer into a single
logical system remains transparent to each network device that interconnects to the VSS-enabled core.
The single logical connection between core and the peer network devices builds a reliable, point-to-point
connection that develops a simplified network topology and builds distributed forwarding tables to fully
use all resources. Figure 3-14 shows a reference VSS-enabled core network design for the main campus
site.
[Figure 3-14: VSS-enabled core network design — access, distribution, service block, WAN/PSTN edge, and Internet/NLR connections into the virtualized core]
Note For more detailed VSS design guidance, see the Campus 3.0 Virtual Switching System Design Guide at
the following URL:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/VSS30dg/campusVSS_DG.html.
Core Layer Design Option 2—Cisco Catalyst 4500-Based Campus Core Network
Core layer design option 2 is intended for a remote medium-sized college campus and is built on the
same principles as for the main and remote large campus locations. The size of this remote site may not
be large, and it is assumed that this location contains distributed building premises within the remote
medium campus design. Because this site is smaller in comparison to the main and remote large campus
locations, a fully redundant, VSS-based core layer design may not be necessary. Therefore, core layer
design option 2 was developed to provide a cost-effective alternative while providing the same
functionality as core layer design option 1. Figure 3-15 shows the remote medium campus core design
option in more detail.
[Figure 3-15: Remote medium campus core design — medium and small buildings with access, distribution, single-chassis core, data center block, service block, and WAN/PSTN edge]
The cost of implementing and managing redundant systems in each tier may complicate the choice of
the three-tier model, especially when the network scale factor is not high. This cost-effective
core network design provides protection against various types of hardware and software failure and
offers sub-second network recovery. Instead of a redundant node in the same tier, a single
Cisco Catalyst 4500-E Series Switch can be deployed in the core role and bundled with 1+1 redundant
in-chassis network components. The Cisco Catalyst 4500-E Series modular platform is a one-size
platform that helps enable the high-speed core backbone to provide uninterrupted network access within
a single chassis. Although a fully redundant, two-chassis design using VSS as described in core layer
option 1 provides the greatest redundancy for large-scale locations, the redundant supervisors and line
cards of the Cisco Catalyst 4500-E provide adequate redundancy for smaller locations within a single
platform. Figure 3-16 shows the redundancy of the Cisco Catalyst 4500-E Series in more detail.
Figure 3-16 Highly Redundant Single Core Design Using the Cisco Catalyst 4500-E Platform
[Figure 3-16: a single core chassis with redundant supervisors, redundant line cards, and redundant power supplies, connected over diverse fiber paths to the distribution layer]
This core network design builds a network topology based on the same common design principles as the
VSS-based campus core in core layer design option 1. Future expansion from a single core to a dual-chassis
VSS-based core system becomes easier to deploy, and helps retain the original network topology and
management operation. This cost-effective single resilient core system for a medium-size college
network meets the following four key goals:
• Scalability—The modular Cisco Catalyst 4500 chassis enables flexibility for core network
expansion with high throughput modules and port scalability without compromising network
performance.
• Resiliency—Because hardware or software failure conditions may create catastrophic results in the
network, the single core system must be equipped with redundant system components such as
supervisor, line card, and power supplies. Implementing redundant components increases the core
network resiliency during various types of failure conditions using Non-Stop Forwarding/Stateful
Switch Over (NSF/SSO) and EtherChannel technology.
• Simplicity—The core network can be simplified with redundant network modules and diverse fiber
connections between the core and other network devices. The Layer 3 network ports must be
bundled into a single point-to-point logical EtherChannel to simplify the network, as in the
VSS-enabled campus design. An EtherChannel-based campus network offers benefits similar to those of
a Multi-chassis EtherChannel (MEC)-based network.
• Cost-effectiveness—A single core system in the core layer helps reduce capital, operational, and
management cost for the medium-sized campus network design.
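As an illustrative sketch only (interface numbers, addressing, and descriptions are hypothetical, not from the source design), the diverse-fiber Layer 3 point-to-point EtherChannel between the single Catalyst 4500-E core and a distribution switch might look as follows in Cisco IOS:

```
! Core Catalyst 4500-E: bundle two uplinks on separate line cards
! into one Layer 3 point-to-point EtherChannel toward distribution
interface TenGigabitEthernet1/1
 description Po1 member via fiber path A
 no switchport
 channel-group 1 mode desirable
!
interface TenGigabitEthernet2/1
 description Po1 member via fiber path B (diverse line card)
 no switchport
 channel-group 1 mode desirable
!
interface Port-channel1
 description Layer 3 point-to-point link to distribution
 ip address 10.125.0.1 255.255.255.252
```

Because both members terminate on different line cards, a single module failure leaves the logical link up and recovery is handled by EtherChannel rather than by routing reconvergence.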
Core Layer Design Option 3—Cisco Catalyst 4500-Based Collapsed Core Campus Network
Core layer design option 3 is intended for the remote small campus network that has consistent network
services and applications service-level requirements but at reduced network scale. The remote small
campus is considered to be confined within a single multi-story building that may span academic
departments across different floors. To provide consistent services and optimal network performance,
scalability, resiliency, simplification, and cost-effectiveness in the small campus network design must
not be compromised.
As discussed in the previous section, the remote small campus has a two-tier LAN design model, so the
role of the core system is merged with the distribution layer. Remote small campus locations follow the
same design guidance and best practices defined for the main, remote large, and remote medium campus
cores. For platform selection, however, the remote medium campus core layer design must be leveraged
to build this two-tier campus core.
A single highly resilient Cisco Catalyst 4500 switch with a Cisco Sup6L-E supervisor must be deployed
in a centralized collapsed core and distribution role that interconnects wiring closet switches, a shared
service block, and a WAN edge router. The cost-effective supervisor version supports key technologies
such as robust QoS, high availability, security, and much more at a lower scale, making it an ideal
solution for small-scale network designs. Figure 3-17 shows the remote small campus core design in
more detail.
Figure 3-17 Core Layer Option 3 Collapsed Core/Distribution Network Design in Remote Small
Campus Location
[Figure 3-17: a collapsed core/distribution Catalyst 4500 interconnecting access switches, the data center block, the service block, and the WAN/PSTN edge]
[Figure: Distribution layer options — a VSS pair (Switch-1/Switch-2 over a VSL) and standalone distribution designs]
Distribution Layer Design Option 1—Cisco Catalyst 6500-E Based Distribution Network
Distribution layer design option 1 is intended for main campus and remote large campus locations, and
is based on Cisco Catalyst 6500 Series switches using the Cisco VSS, as shown in Figure 3-19.
[Figure 3-19: VSS-enabled distribution layer — redundant Catalyst 6500-E switches (Switch-1/Switch-2 over a VSL) aggregating access switches for Floor 1 (TelePresence Conference Room), Floor 2 (Science Lab), and Floor 3 (Library)]
The distribution block and core network operation changes significantly when redundant
Cisco Catalyst 6500-E Series switches are deployed in VSS mode in both the distribution and core
layers. Clustering redundant distribution switches into a single logical system with VSS introduces the
following technical benefits:
• A single logical system reduces operational, maintenance, and ownership cost.
• A single logical IP gateway develops a unified point-to-point network topology in the distribution
block, which eliminates traditional protocol limitations and enables the network to operate at full
capacity.
• Implementing the distribution layer in VSS mode eliminates or reduces several deployment barriers,
such as spanning-tree loop, Hot Standby Routing Protocol (HSRP)/Gateway Load Balancing
Protocol (GLBP)/Virtual Router Redundancy Protocol (VRRP), and control plane overhead.
[Figures: VSS domain design options 1 through 5 — access and distribution VSS pairs (Switch-1/Switch-2 over VSLs) with unique VSS domain IDs (for example, domain ID 1 and domain ID 2); distribution platform choices of Cisco Catalyst 4500-E with Sup6-E or Sup6E-L]
The hybrid distribution block must be deployed with the next-generation supervisor Sup6-E module.
Implementing redundant Sup6-Es in the distribution layer can interconnect access layer switches and
core layer switches using a single point-to-point logical connection. This cost-effective and resilient
distribution design option leverages core layer design option 2 to take advantage of all the operational
consistency and architectural benefits.
Alternatively, the multilayer distribution block option requires the Cisco Catalyst 4500-E Series Switch
with next-generation supervisor Sup6E-L deployed. The Sup6E-L supervisor is a cost-effective
distribution layer solution that meets all network foundation requirements and can operate at moderate
capacity, which can handle a medium-sized college distribution block.
This distribution layer network design provides protection against various types of hardware and
software failure, and can deliver consistent sub-second network recovery. A single Catalyst 4500-E with
multiple redundant system components can be deployed to offer 1+1 in-chassis redundancy, as shown in
Figure 3-22.
[Figure 3-22: a single Catalyst 4500-E distribution chassis with redundant supervisors, redundant line cards, and redundant power supplies aggregating access switches for Floor 1 (TelePresence Conference Room), Floor 2 (Science Lab), and Floor 3 (Library)]
Distribution layer design option 2 is intended for the remote medium-sized campus locations, and is
based on the Cisco Catalyst 4500 Series Switches. Although the remote medium and the main and remote
large campus locations share similar design principles, the remote medium campus location is smaller
and may not need a VSS-based redundant design. Fortunately, network upgrades and expansion become
easier to deploy using distribution layer option 2, which helps retain the original network topology and
the management operation. Distribution layer design option 2 meets the following goals:
• Scalability—The modular Cisco Catalyst 4500 chassis provides the flexibility for distribution block
expansion with high throughput modules and port scalability without compromising network
performance.
• Resiliency—The single distribution system must be equipped with redundant system components,
such as supervisor, line card, and power supplies. Implementing redundant components increases
network resiliency during various types of failure conditions using NSF/SSO and EtherChannel
technology.
• Simplicity—This cost-effective design simplifies the distribution block similarly to a VSS-enabled
distribution system. The single IP gateway design develops a unified point-to-point network
topology in the distribution block to eliminate traditional protocol limitations, enabling the network
to operate at full capacity.
• Cost-effectiveness—The single distribution system helps reduce capital, operational, and
ownership cost for the medium-sized campus network design.
Distribution Layer Design Option 3—Cisco Catalyst 3750-E StackWise-Based Distribution Network
Distribution layer design option 3 is intended for a very small building with a limited number of wiring
closet switches in the access layer that connect remote classrooms or an office network with a
centralized core, as shown in Figure 3-23.
[Figure 3-23: a Catalyst 3750-E StackWise Plus distribution stack aggregating access layer switches]
While network services remain consistent throughout the campus, the number of network users and
IT-managed remote endpoints in this building can be limited. This distribution layer design option
recommends using the Cisco Catalyst 3750-E StackWise Plus Series platform for the distribution layer
switch.
The fixed-configuration Cisco Catalyst 3750-E Series Switch is a multilayer platform that supports
Cisco StackWise Plus technology to simplify the network and offers flexibility to expand the network as
it grows. With Cisco StackWise Plus technology, Catalyst 3750-E switches can be clustered over a
high-speed backplane stack ring to operate logically as a single large distribution system. Cisco
StackWise Plus supports up to nine switches in a single stack ring for incremental network upgrades, and
increases effective throughput capacity up to 64 Gbps. Chassis redundancy is achieved via stacking:
member chassis replicate the control functions, with each member providing distributed packet
forwarding, so that the stacked group acts as a single virtual Catalyst 3750-E switch. The logical switch
is represented as one switch by having one stack member act as the master switch. When a failover
occurs, any member of the stack can take over as master and continue the same services; this is a 1:N
form of redundancy. This distribution layer design option is ideal for the remote small campus location.
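As a sketch of the 1:N master redundancy described above (the priority values are arbitrary, not from the source design), master election in a Catalyst 3750-E stack can be steered by assigning switch priorities so that a preferred member wins:

```
! On the Catalyst 3750-E stack (values are illustrative)
switch 1 priority 15   ! preferred stack master
switch 2 priority 14   ! first backup candidate
! Remaining members keep the default priority; any member can
! still take over as master if higher-priority members fail.
```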
[Figure: Access layer connectivity to the Sup6E-L-based and StackWise Plus-based distribution options]
Note This section does not explain the fundamentals of TCP/IP addressing; for more details, see the many
Cisco Press publications that cover this topic.
Although enabling routing functions in the core is a simple task, the routing blueprint must be well
understood and designed before implementation, because it provides the end-to-end reachability path of
the college network. For an optimized routing design, the following three routing components must be
identified and designed to allow more network growth and provide a stable network, independent of
scale:
• Hierarchical network addressing—Structured IP network addressing in the community college
LAN and/or WAN design is required to make the network scalable, optimal, and resilient.
• Routing protocol—Cisco IOS supports a wide range of Interior Gateway Protocols (IGPs). Cisco
recommends deploying a single routing protocol across the community college network
infrastructure.
• Hierarchical routing domain—Routing protocols must be designed in a hierarchical model that
allows the network to scale and operate with greater stability. Building a routing boundary and
summarizing the network minimizes the topology size and synchronization procedure, which
improves overall network resource use and re-convergence.
The criteria for choosing the right protocol vary based on the end-to-end network infrastructure.
Although all the routing protocols that Cisco IOS currently supports can provide a viable solution,
network architects must consider all the following critical design factors when selecting the right routing
protocol to be implemented throughout the internal network:
• Network design—Requires a proven protocol that can scale in full-mesh campus network designs
and can optimally function in hub-and-spoke WAN network topologies.
• Scalability—The routing protocol function must be network- and system-efficient and operate with
a minimal number of updates and re-computation, independent of the number of routes in the
network.
• Rapid convergence—Link-state versus DUAL re-computation and synchronization. Network
re-convergence also varies based on network design, configuration, and a multitude of other factors
that may be more than a specific routing protocol can handle. The best convergence time can be
achieved from a routing protocol if the network is designed to the strengths of the protocol.
• Operational—A simplified routing protocol that can provide ease of configuration, management,
and troubleshooting.
Cisco IOS supports a wide range of routing protocols, such as Routing Information Protocol (RIP) v1/2,
Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), and
Intermediate System-to-Intermediate System (IS-IS). However, Cisco recommends using EIGRP or
OSPF for this network design. EIGRP is a popular version of an Interior Gateway Protocol (IGP) because
it has all the capabilities needed for small to large-scale networks, offers rapid network convergence, and
above all is simple to operate and manage. OSPF is a popular link-state protocol for large-scale enterprise
and service provider networks. OSPF enforces hierarchical routing domains in two tiers by
implementing backbone and non-backbone areas. The OSPF area function depends on the network
connectivity model and the role of each OSPF router in the domain. OSPF can scale higher but the
operation, configuration, and management might become too complex for the community college LAN
network infrastructure.
Other technical factors must be considered when implementing OSPF in the network, such as OSPF
router type, link type, maximum transmission unit (MTU) considerations, designated router
(DR)/backup designated router (BDR) priority, and so on. This document provides design guidance for
using simplified EIGRP in the community college campus and WAN network infrastructure.
Note For detailed information on EIGRP and OSPF, see the following URL:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/routed-ex.html.
EIGRP is a balanced hybrid routing protocol that builds neighbor adjacency and flat routing topology on
a per autonomous system (AS) basis. Cisco recommends considering the following three critical design
tasks before implementing EIGRP in the community college LAN core layer network:
• EIGRP autonomous system—The Layer 3 LAN and WAN infrastructure of the community college
design must be deployed in a single EIGRP AS, as shown in Figure 3-25. A single EIGRP AS
reduces operational tasks and prevents route redistribution, loops, and other problems that may
occur because of misconfiguration.
Figure 3-25 Sample End-to-End EIGRP Routing Design in Community College LAN Network
[Figure 3-25: a single EIGRP AS 100 spanning the main campus (VSS-enabled core and distribution) and the WAN]
In the example in Figure 3-25, AS100 is the single EIGRP AS for the entire design.
[Figure: EIGRP route aggregation in the main campus — distribution, core, and WAN edge systems acting as route aggregators between the access layer and the WAN]
By default, EIGRP speakers transmit Hello packets every 5 seconds and terminate the EIGRP adjacency
if no Hello is received from a neighbor within the 15-second hold time. In this network design, Cisco
recommends retaining the default EIGRP Hello and Hold timers on all EIGRP-enabled platforms.
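As a minimal sketch of the single-AS recommendation above (the AS number 100 follows the figure; networks are hypothetical), the core layer EIGRP configuration could look like this, with the default timers left untouched:

```
! Core/distribution switch (networks are illustrative)
router eigrp 100
 network 10.125.0.0 0.0.255.255
 no auto-summary
! Default Hello (5 s) and Hold (15 s) timers are retained; they could
! be tuned per interface with "ip hello-interval eigrp 100 <seconds>"
! and "ip hold-time eigrp 100 <seconds>", but the defaults are
! recommended in this design.
```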
Because the distribution layer can be deployed with both Layer 2 and Layer 3 technologies, the following
two network designs are recommended:
• Multilayer
• Routed access
[Figure: Multilayer access-distribution design options — a V-shape network versus a single loop-free EtherChannel topology spanning a VLAN (for example, VLAN 90)]
Cisco recommends that the hybrid multilayer access-distribution block design use a loop-free network
topology, and span a few VLANs that require such flexibility, such as the management VLAN.
Ensuring a loop-free topology is critical in a multilayer network design. Spanning-Tree Protocol (STP)
dynamically develops a loop-free multilayer network topology that can compute the best forwarding
path and provide redundancy. Although STP behavior is deterministic, it is not optimally designed to
mitigate network instability caused by hardware miswiring or software misconfiguration. Cisco has
developed several STP extensions to protect against network malfunctions, and to increase stability and
availability. All Cisco Catalyst LAN switching platforms support the complete STP toolkit suite that
must be enabled globally on individual logical and physical ports of the distribution and access layer
switches.
Figure 3-28 shows an example of enabling various STP extensions on distribution and access layer
switches in all campus sites.
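As an illustrative sketch only (interface identifiers are hypothetical; command availability varies by platform and IOS release), the STP toolkit hardening shown in the figure might be configured along these lines:

```
! Distribution (VSS) switch: deterministic root bridge placement
spanning-tree vlan 1-4094 root primary
!
! Access switch: protect edge ports and fiber uplinks
spanning-tree portfast bpduguard default   ! err-disable edge ports that receive BPDUs
udld enable                                ! global UDLD on fiber interfaces
!
interface GigabitEthernet1/0/1
 description Endpoint edge port
 spanning-tree portfast
```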
[Figure 3-28: STP root bridge on the VSS-enabled distribution layer, UDLD on the uplinks, and BPDU Guard on the access layer Layer 2 edge ports (for example, the Library access switch)]
Figure 3-29 Layer 2 and Layer 3 Boundaries for Multilayer and Routed Access Network Design
[Figure 3-29: multilayer network — routing in the core and distribution layers, STP and Layer 2 in the access block (Admin, Library, and Arts VLANs 10, 20, and 30); routed-access network — routing extended down to the access layer, with Layer 2 confined to the edge VLANs]
A routed-access network design moves the Layer 2/Layer 3 demarcation point to the access layer, where
the Layer 3 access switches provide inter-VLAN routing and gateway functions to the endpoints. Like
distribution layer switches, the Layer 3 access switches make intelligent, multi-function, policy-based
routing and switching decisions.
Although Cisco VSS and the single redundant distribution designs are already simplified with a single
point-to-point EtherChannel, the benefits of implementing the routed access design in community
colleges are as follows:
• Eliminates the need for implementing STP and the STP toolkit on the distribution system. As a best
practice, the STP toolkit must be hardened at the access layer.
• Shrinks the Layer 2 fault domain, thus minimizing the number of denial-of-service (DoS)/
distributed denial-of-service (DDoS) attacks.
• Bandwidth efficiency—Improves Layer 3 uplink network bandwidth efficiency by suppressing
Layer 2 broadcasts at the edge port.
• Improves overall collapsed core and distribution resource utilization.
Enabling Layer 3 functions in the access-distribution block must follow the same core network designs
as mentioned in previous sections to provide network security as well as optimize the network topology
and system resource utilization:
• EIGRP autonomous system—Layer 3 access switches must be deployed in the same EIGRP AS as
the distribution and core layer systems.
• EIGRP adjacency protection—EIGRP processing must be enabled on uplink Layer 3
EtherChannels, and must block remaining Layer 3 ports by default in passive mode. Access switches
must establish secured EIGRP adjacency using the MD5 hash algorithm with the aggregation
system.
• EIGRP network boundary—All EIGRP neighbors must be in a single AS to build a common network
topology. The Layer 3 access switches must be deployed in EIGRP stub mode for a concise network
view.
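The three EIGRP practices above can be sketched for a Layer 3 access switch as follows (AS number 100 matches the design; the key chain name, key string placeholder, networks, and interface are hypothetical):

```
! Layer 3 access switch (illustrative values)
key chain eigrp-keys
 key 1
  key-string <shared-secret>
!
interface Port-channel1
 description Uplink EtherChannel to distribution
 ip authentication mode eigrp 100 md5
 ip authentication key-chain eigrp 100 eigrp-keys
!
router eigrp 100
 network 10.125.0.0 0.0.255.255
 passive-interface default          ! block EIGRP on all Layer 3 ports...
 no passive-interface Port-channel1 ! ...except the uplink EtherChannel
 eigrp stub connected summary       ! concise network view at the access layer
 no auto-summary
```

Stub mode keeps the access switch out of the query path during reconvergence, while MD5 authentication prevents rogue devices from forming an adjacency.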
Figure 3-30 Designing and Optimizing EIGRP Network Boundary for the Access Layer
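The EIGRP routed-access guidelines above can be sketched in Cisco IOS configuration. The following is a minimal, hedged example for a Layer 3 access switch; the AS number (100), key-chain name, network range, and interface names are illustrative assumptions rather than values from this design.

```
! Layer 3 access switch -- EIGRP stub in the same AS as distribution/core.
! AS number, key-chain name, networks, and interfaces are illustrative.
key chain eigrp-keys
 key 1
  key-string <md5-secret>
!
router eigrp 100
 network 10.125.0.0 0.0.255.255
 passive-interface default
 no passive-interface Port-channel1
 eigrp stub connected
!
interface Port-channel1
 description Layer 3 EtherChannel uplink to distribution
 ip authentication mode eigrp 100 md5
 ip authentication key-chain eigrp 100 eigrp-keys
```

The `passive-interface default` statement blocks EIGRP processing on all Layer 3 ports except the uplink EtherChannel, and `eigrp stub connected` advertises only a concise topology view to the aggregation system.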
During the multicast network design phase, community college network architects must select a range of multicast group addresses from the limited scope pool (239/8).
It is assumed that each community college site has a wide range of local multicast sources in the data
center for distributed community college IT-managed media and student research and development
applications. In such a distributed multicast network design, Cisco recommends deploying PIM RP on
each site for wired or wireless multicast receivers and sources to join and register at the closest RP. The
community college reference design recommends PIM-SM RP placement on a VSS-enabled and single
resilient core system in the three-tier campus design, or on the collapsed core/distribution system in the
two-tier campus design model.
PIM-SM RP redundancy and load sharing becomes imperative in the community college LAN design,
because each recommended core layer design model provides resiliency and simplicity. In the
Cisco Catalyst 6500 VSS-enabled core layer, the dynamically discovered group-to-RP entries are fully
synchronized to the standby switch. Combining NSF/SSO capabilities with IPv4 multicast reduces the
network recovery time and retains the user and application performance at an optimal level. In the
non-VSS-enabled network design, PIM-SM uses Anycast RP and Multicast Source Discovery Protocol
(MSDP) for node failure protection. PIM-SM redundancy and load sharing is simplified with the Cisco
VSS-enabled core. Because VSS is logically a single system and provides node protection, there is no
need to implement Anycast RP and MSDP on a VSS-enabled PIM-SM RP.
MSDP allows PIM RPs to share information about the active sources. PIM-SM RPs discover local
receivers through PIM join messages, while the multicast source can be in a local or remote network
domain. MSDP allows each multicast domain to maintain an independent RP that does not rely on other
multicast domains, but does enable RPs to forward traffic between domains. PIM-SM is used to forward
the traffic between the multicast domains.
Anycast RP is a useful application of MSDP. Originally developed for interdomain multicast
applications, MSDP used with Anycast RP is an intradomain feature that provides redundancy and load
sharing capabilities. Large networks typically use Anycast RP for configuring a PIM-SM network to
meet fault tolerance requirements within a single multicast domain.
The community college LAN multicast network must be designed with Anycast RP. PIM-SM RP at the
main or the centralized core must establish an MSDP session with RP on each remote site to exchange
distributed multicast source information and allow RPs to join SPT to active sources as needed.
Figure 3-31 shows an example of a community college LAN multicast network design.
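The Anycast RP with MSDP design described above can be outlined as follows. This is a hedged sketch; all addresses, loopback numbers, and the peer address are illustrative assumptions. Both RPs advertise the same shared Anycast address, while MSDP peers over unique loopback addresses to exchange active-source information.

```
! Main-campus core RP -- Anycast RP with MSDP (addresses illustrative).
interface Loopback0
 ip address 10.122.0.1 255.255.255.255      ! unique MSDP peer address
!
interface Loopback1
 ip address 10.100.100.100 255.255.255.255  ! shared Anycast RP address
 ip pim sparse-mode
!
ip pim rp-address 10.100.100.100
ip msdp peer 10.123.0.1 connect-source Loopback0
ip msdp originator-id Loopback0
```

A mirror-image configuration on the remote-site RP (with its own unique Loopback0 address and the same shared Anycast address) completes the redundant RP pair.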
IGMP is still required when a Layer 3 access layer switch is deployed in the routed access network
design. Because the Layer 3 boundary is pushed down to the access layer, IGMP communication is
limited between a receiver host and the Layer 3 access switch. In addition to the unicast routing protocol,
PIM-SM must be enabled at the Layer 3 access switch to communicate with RPs in the network.
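At the routed-access edge, the multicast configuration is brief. The following hedged example assumes the VLAN number and RP address used earlier in this sketch; the switch translates IGMP joins from receivers into PIM joins toward the RP.

```
! Layer 3 access switch -- PIM-SM toward the RP, IGMP toward receivers.
! VLAN and RP address are illustrative.
ip multicast-routing
!
interface Vlan10
 description Admin VLAN (receiver-facing SVI)
 ip pim sparse-mode
!
ip pim rp-address 10.100.100.100
```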
In a PIM-SM network, an unwanted traffic source can be controlled with the pim accept-register
command. When the source traffic hits the first-hop router, the first-hop router (DR) creates the (S,G)
state and sends a PIM source register message to the RP. If the source is not listed in the accept-register
filter list (configured on the RP), the RP rejects the register and sends back an immediate Register-Stop
message to the DR. The drawback with this method of source filtering is that with the pim
accept-register command on the RP, the PIM-SM (S,G) state is still created on the first-hop router of
the source. This can result in traffic reaching receivers local to the source and located between the source
and the RP. Furthermore, because the pim accept-register command works on the control plane of the
RP, this can be used to overload the RP with fake register messages and possibly cause a DoS condition.
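A hedged example of the register filter on the RP follows; the ACL number and permitted source subnet are illustrative assumptions. Registers from sources outside the permitted range are rejected with an immediate Register-Stop.

```
! On the RP: accept PIM register messages only from permitted sources.
! ACL number and subnet are illustrative.
access-list 10 permit 10.125.31.0 0.0.0.255
ip pim accept-register list 10
```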
Like the multicast source, any router can be misconfigured or can maliciously advertise itself as a
multicast RP in the network with the valid multicast group address. With a static RP configuration, each
PIM-enabled router in the network can be configured to use static RP for the multicast source and
override any other Auto-RP or BSR multicast router announcement from the network.
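A static RP configuration of this kind might look as follows; the RP address and group ACL are illustrative assumptions. The `override` keyword gives the static entry precedence over any Auto-RP or BSR announcement learned from the network.

```
! Static RP on each PIM-enabled router; addresses are illustrative.
access-list 20 permit 239.0.0.0 0.255.255.255
ip pim rp-address 10.100.100.100 20 override
```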
By embracing media applications as the next cycle of convergence, community college IT departments can think holistically about their network design and its readiness to support the coming tidal wave of media applications, and develop a network-wide strategy to ensure high-quality end-user experiences.
The community college LAN infrastructure must set the administrative policies to provide differentiated forwarding services to the network applications, users, and endpoints to prevent contention. The characteristics of network services and applications must be well understood, so that policies can be defined that allow network resources to be used for internal applications, to provide best-effort services for external traffic, and to keep the network protected from threats.
The policy for providing network resources to an internal application is further complicated when interactive video and real-time VoIP applications are converged over the same network that is switching mid-to-low priority data traffic. Deploying QoS technologies in the campus allows different types of traffic to contend inequitably for network resources. Real-time applications such as voice, interactive video, and physical security video can be given priority or preferential service over generic data applications, but not to the point that data applications are starved of bandwidth.
Figure 3-32 Community College LAN Campus 12-Class QoS Policy Recommendation
Application Class         Media Application Examples   PHB  Admission Control  Queuing and Dropping
VoIP Telephony            Cisco IP Phones              EF   Required           Priority Queue (PQ)
Broadcast Video           Cisco IPVS, Enterprise TV    CS5  Required           (Optional) PQ
Realtime Interactive      Cisco TelePresence           CS4  Required           (Optional) PQ
Multimedia Conferencing   Cisco CUPC, WebEx            AF4  Required           BW Queue + DSCP WRED
Multimedia Streaming      Cisco DMS, IP/TV             AF3  Recommended        BW Queue + DSCP WRED
Network Control           EIGRP, OSPF, HSRP, IKE       CS6                     BW Queue
Call-Signaling            SCCP, SIP, H.323             CS3                     BW Queue
OAM                       SNMP, SSH, Syslog            CS2                     BW Queue
Transactional Data        ERP Apps, CRM Apps           AF2                     BW Queue + DSCP WRED
Bulk Data                 E-mail, FTP, Backups         AF1                     BW Queue + DSCP WRED
Best Effort               Default Class                DF                      Default Queue + RED
Scavenger                 YouTube, Gaming, P2P         CS1                     Min BW Queue
additionally, traffic in this class may be subject to policing and re-marking. Sample applications
include Cisco Unified Personal Communicator, Cisco Unified Video Advantage, and the Cisco
Unified IP Phone 7985G.
• Multimedia streaming—This service class is intended for video-on-demand (VoD) streaming video
flows, which, in general, are more elastic than broadcast/live streaming flows. Traffic in this class
should be marked AF Class 3 (AF31) and should be provisioned with a guaranteed bandwidth queue
with DSCP-based WRED enabled. Admission control is recommended on this traffic class (though
not strictly required) and this class may be subject to policing and re-marking. Sample applications
include Cisco Digital Media System VoD streams.
• Network control—This service class is intended for network control plane traffic, which is required
for reliable operation of the enterprise network. Traffic in this class should be marked CS6 and
provisioned with a (moderate, but dedicated) guaranteed bandwidth queue. WRED should not be
enabled on this class, because network control traffic should not be dropped (if this class is
experiencing drops, the bandwidth allocated to it should be re-provisioned). Sample traffic includes
EIGRP, OSPF, Border Gateway Protocol (BGP), HSRP, Internet Key Exchange (IKE), and so on.
• Call-signaling—This service class is intended for signaling traffic that supports IP voice and video
telephony. Traffic in this class should be marked CS3 and provisioned with a (moderate, but
dedicated) guaranteed bandwidth queue. WRED should not be enabled on this class, because
call-signaling traffic should not be dropped (if this class is experiencing drops, the bandwidth
allocated to it should be re-provisioned). Sample traffic includes Skinny Call Control Protocol
(SCCP), Session Initiation Protocol (SIP), H.323, and so on.
• Operations/administration/management (OAM)—This service class is intended for network
operations, administration, and management traffic. This class is critical to the ongoing maintenance
and support of the network. Traffic in this class should be marked CS2 and provisioned with a
(moderate, but dedicated) guaranteed bandwidth queue. WRED should not be enabled on this class,
because OAM traffic should not be dropped (if this class is experiencing drops, the bandwidth
allocated to it should be re-provisioned). Sample traffic includes Secure Shell (SSH), Simple
Network Management Protocol (SNMP), Syslog, and so on.
• Transactional data (or low-latency data)—This service class is intended for interactive,
“foreground” data applications (foreground refers to applications from which users are expecting a
response via the network to continue with their tasks; excessive latency directly impacts user
productivity). Traffic in this class should be marked AF Class 2 (AF21) and should be provisioned
with a dedicated bandwidth queue with DSCP-WRED enabled. This traffic class may be subject to
policing and re-marking. Sample applications include data components of multimedia collaboration
applications, Enterprise Resource Planning (ERP) applications, Customer Relationship
Management (CRM) applications, database applications, and so on.
• Bulk data (or high-throughput data)—This service class is intended for non-interactive
“background” data applications (background refers to applications from which users are not
awaiting a response via the network to continue with their tasks; excessive latency in response times
of background applications does not directly impact user productivity). Traffic in this class should
be marked AF Class 1 (AF11) and should be provisioned with a dedicated bandwidth queue with
DSCP-WRED enabled. This traffic class may be subject to policing and re-marking. Sample
applications include E-mail, backup operations, FTP/SFTP transfers, video and content distribution,
and so on.
• Best effort (or default class)—This service class is the default class. The vast majority of
applications will continue to default to this best-effort service class; as such, this default class should
be adequately provisioned. Traffic in this class is marked default forwarding (DF or DSCP 0) and
should be provisioned with a dedicated queue. WRED is recommended to be enabled on this class.
• Scavenger (or low-priority data)—This service class is intended for non-business-related traffic
flows, such as data or video applications that are entertainment and/or gaming-oriented. The
approach of a less-than Best-Effort service class for non-business applications (as opposed to
shutting these down entirely) has proven to be a popular, political compromise. These applications
are permitted on enterprise networks, as long as resources are always available for business-critical
voice, video, and data applications. However, as soon as the network experiences congestion, this
class is the first to be penalized and aggressively dropped. Traffic in this class should be marked CS1
and should be provisioned with a minimal bandwidth queue that is the first to starve should network
congestion occur. Sample traffic includes YouTube, Xbox Live/360 movies, iTunes, BitTorrent, and
so on.
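Under the Cisco Modular QoS CLI (MQC), the service classes above map to their DSCP markings as in the following hedged excerpt. Class names are illustrative, and only a subset of the twelve classes is shown.

```
! Illustrative class-map definitions for a subset of the 12-class model.
class-map match-all MULTIMEDIA-CONFERENCING
 match dscp af41 af42 af43
class-map match-all MULTIMEDIA-STREAMING
 match dscp af31 af32 af33
class-map match-all NETWORK-CONTROL
 match dscp cs6
class-map match-all CALL-SIGNALING
 match dscp cs3
class-map match-all TRANSACTIONAL-DATA
 match dscp af21 af22 af23
class-map match-all SCAVENGER
 match dscp cs1
```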
Figure 3-33 QoS Trust Boundary, Ingress QoS Policy (Trust, Classification, Marking, Policing, and Queueing), and Egress QoS Policy (Queueing and WTD)
A fundamental QoS design principle is to always enable QoS policies in hardware rather than software
whenever possible. Cisco IOS routers perform QoS in software, which places incremental loads on the
CPU, depending on the complexity and functionality of the policy. Cisco Catalyst switches, on the other
hand, perform QoS in dedicated hardware application-specific integrated circuits (ASICs) on
Ethernet-based ports, and as such do not tax their main CPUs to administer QoS policies. This allows
complex policies to be applied at line rates even up to Gigabit or 10-Gigabit speeds.
When classifying and marking traffic, a recommended design principle is to classify and mark
applications as close to their sources as technically and administratively feasible. This principle
promotes end-to-end differentiated services and PHBs.
In general, it is not recommended to trust markings that can be set by users on their PCs or other similar
devices, because users can easily abuse provisioned QoS policies if permitted to mark their own traffic.
For example, if an EF PHB has been provisioned over the network, a PC user can easily configure all
their traffic to be marked to EF, thus hijacking network priority queues to service non-realtime traffic.
Such abuse can easily ruin the service quality of realtime applications throughout the college campus.
On the other hand, if community college network administrator controls are in place that centrally
administer PC QoS markings, it may be possible and advantageous to trust these.
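One common way to enforce this trust boundary at the access edge is conditional trust, which extends trust only when a Cisco IP phone is detected through CDP. The following is a hedged sketch for a Catalyst access switch; the interface and VLAN numbers are illustrative assumptions, and the exact command set varies by platform.

```
! Conditional trust: trust DSCP only when a Cisco IP phone is detected.
! Interface and VLANs are illustrative.
mls qos
!
interface GigabitEthernet1/0/1
 switchport access vlan 10
 switchport voice vlan 110
 mls qos trust device cisco-phone
 mls qos trust dscp
```

If no phone is detected on the port, the trust state reverts to untrusted and user-set markings are ignored.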
Following this rule, it is recommended to use DSCP markings whenever possible, because these are
end-to-end, more granular, and more extensible than Layer 2 markings. Layer 2 markings are lost when
the media changes (such as a LAN-to-WAN/VPN edge). There is also less marking granularity at
Layer 2. For example, 802.1P supports only three bits (values 0–7), as does Multiprotocol Label
Switching Experimental (MPLS EXP). Therefore, only up to eight classes of traffic can be supported at
Layer 2, and inter-class relative priority (such as RFC 2597 Assured Forwarding Drop Preference
markdown) is not supported. Layer 3-based DSCP markings allow for up to 64 classes of traffic, which
provides more flexibility and is adequate in large-scale deployments and for future requirements.
As the network border blurs between enterprise and education community network and service
providers, the need for interoperability and complementary QoS markings is critical. Cisco recommends
following the IETF standards-based DSCP PHB markings to ensure interoperability and future
expansion. Because the community college voice, video, and data applications marking
recommendations are standards-based, as previously discussed, community colleges can easily adopt
these markings to interface with service provider classes of service.
There is little reason to forward unwanted traffic that gets policed and dropped by a subsequent tier node,
especially when unwanted traffic is the result of DoS or worm attacks in the college network. Excessive
volume attack traffic can destabilize network systems, which can result in outages. Cisco recommends
policing traffic flows as close to their sources as possible. This principle applies also to legitimate flows,
because worm-generated traffic can masquerade under legitimate, well-known TCP/UDP ports and
cause extreme amounts of traffic to be poured into the network infrastructure. Such excesses should be
monitored at the source and marked down appropriately.
Whenever supported, markdown should be done according to standards-based rules, such as RFC 2597
(AF PHB). For example, excess traffic marked to AFx1 should be marked down to AFx2 (or AFx3
whenever dual-rate policing such as defined in RFC 2698 is supported). Following such markdowns,
congestion management policies, such as DSCP-based WRED, should be configured to drop AFx3 more
aggressively than AFx2, which in turn should be dropped more aggressively than AFx1.
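A markdown policy of this kind might be sketched as follows. This is a hedged example: the policing rate, bandwidth percentage, and WRED thresholds are illustrative assumptions, not recommended values. Excess AF21 traffic is re-marked to AF22, and DSCP-based WRED then drops AF22 before AF21.

```
! Single-rate policer marking excess AF21 down to AF22, with DSCP-based
! WRED dropping higher drop-precedence values more aggressively.
class-map match-all TRANSACTIONAL-DATA
 match dscp af21 af22 af23
!
policy-map TRANSACTIONAL-POLICE
 class TRANSACTIONAL-DATA
  police 10000000 conform-action transmit exceed-action set-dscp-transmit af22
!
policy-map WAN-EDGE-QUEUING
 class TRANSACTIONAL-DATA
  bandwidth percent 10
  random-detect dscp-based
  random-detect dscp af21 32 40
  random-detect dscp af22 24 40
  random-detect dscp af23 16 40
```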
Critical media applications require uncompromised performance and service guarantees regardless of network conditions. Enabling outbound queueing in each network tier provides end-to-end service guarantees during potential network congestion. This common principle applies to campus-to-WAN/Internet edges, where speed mismatches are most pronounced, and to campus interswitch links, where oversubscription ratios create the greatest potential for network congestion.
Because each application class has unique service-level requirements, each should optimally be assigned a dedicated queue. However, the wide range of platforms in varying roles in community college networks is bounded by a limited number of hardware or service provider queues. No fewer than four queues are required to support QoS policies for the various types of applications, specifically as follows:
• Realtime queue (to support a RFC 3246 EF PHB service)
• Guaranteed-bandwidth queue (to support RFC 2597 AF PHB services)
• Default queue (to support a RFC 2474 DF service)
• Bandwidth-constrained queue (to support a RFC 3662 scavenger service)
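The four queues above can be sketched as an MQC egress policy. This is a hedged example: the class names and the 41-percent guaranteed-bandwidth figure are illustrative assumptions, while the 33-, 25-, and 1-percent allocations follow the realtime, best-effort, and scavenger best practices discussed in this section.

```
! Four-queue egress policy mapping the PHB groups above.
class-map match-any REALTIME
 match dscp ef
class-map match-any GUARANTEED-BW
 match dscp cs6 cs3 cs2
 match dscp af41 af31 af21 af11
class-map match-any SCAVENGER
 match dscp cs1
!
policy-map FOUR-CLASS-EGRESS
 class REALTIME
  priority percent 33
 class GUARANTEED-BW
  bandwidth percent 41
  random-detect dscp-based
 class SCAVENGER
  bandwidth percent 1
 class class-default
  bandwidth percent 25
  random-detect
```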
Additional queuing recommendations for these classes are discussed next.
The realtime or strict priority class corresponds to the RFC 3246 EF PHB. The amount of bandwidth
assigned to the realtime queuing class is variable. However, if the majority of bandwidth is provisioned
with strict priority queuing (which is effectively a FIFO queue), the overall effect is a dampening of QoS
functionality, both for latency- and jitter-sensitive realtime applications (contending with each other
within the FIFO priority queue), and also for non-realtime applications (because these may periodically
receive significant bandwidth allocation fluctuations, depending on the instantaneous amount of traffic
being serviced by the priority queue). Remember that the goal of convergence is to enable voice, video,
and data applications to transparently co-exist on a single community college network infrastructure.
When realtime applications dominate a link, non-realtime applications fluctuate significantly in their
response times, destroying the transparency of the converged network.
For example, consider a 45 Mbps DS3 link configured to support two Cisco TelePresence CTS-3000
calls with an EF PHB service. Assuming that both systems are configured to support full high definition,
each such call requires 15 Mbps of strict-priority queuing. Before the TelePresence calls are placed,
non-realtime applications have access to 100 percent of the bandwidth on the link; to simplify the
example, assume there are no other realtime applications on this link. However, after these TelePresence
calls are established, all non-realtime applications are suddenly contending for less than 33 percent of
the link. TCP windowing takes effect and many applications hang, timeout, or become stuck in a
non-responsive state, which usually translates into users calling the IT help desk to complain about the
network (which happens to be functioning properly, albeit in a poorly-configured manner).
Note As previously discussed, Cisco IOS software allows the abstraction (and thus configuration) of multiple
strict priority LLQs. In such a multiple LLQ context, this design principle applies to the sum of all LLQs
to be within one-third of link capacity.
It is vitally important to understand that this strict priority queuing rule is simply a best practice design
recommendation and is not a mandate. There may be cases where specific business objectives cannot be
met while holding to this recommendation. In such cases, the community college network administrator
must provision according to their detailed requirements and constraints. However, it is important to
recognize the tradeoffs involved with over-provisioning strict priority traffic and its negative
performance impact, both on other realtime flows and also on non-realtime-application response times.
And finally, any traffic assigned to a strict-priority queue should be governed by an admission control
mechanism.
The best effort class is the default class for all traffic that has not been explicitly assigned to another
application-class queue. Only if an application has been selected for preferential/deferential treatment
is it removed from the default class. Because most community colleges may have several types of
applications running in networks, adequate bandwidth must be provisioned for this class as a whole to
handle the number and volume of applications that default to it. Therefore, Cisco recommends reserving
at least 25 percent of link bandwidth for the default best effort class.
Whenever the scavenger queuing class is enabled, it should be assigned a minimal amount of link
bandwidth capacity, such as 1 percent, or whatever the minimal bandwidth allocation that the platform
supports. On some platforms, queuing distinctions between bulk data and scavenger traffic flows cannot
be made, either because queuing assignments are determined by class of service (CoS) values (and both
of these application classes share the same CoS value of 1), or because only a limited amount of
hardware queues exist, precluding the use of separate dedicated queues for each of these two classes. In
such cases, the scavenger/bulk queue can be assigned a moderate amount of bandwidth, such as 5
percent.
These queuing rules are summarized in Figure 3-34, where the inner pie chart represents a hardware or
service provider queuing model that is limited to four queues and the outer pie chart represents a
corresponding, more granular queuing model that is not bound by such constraints.
Figure 3-34 Four-Queue (Inner) and Twelve-Class (Outer) Queuing Models
Figure 3-35 Community College LAN Design High-Availability Goals, Strategy, and Technologies
Resilient technologies: EtherChannel/MEC, NSF/SSO, ISSU/eFSU, StackWise, UDLD, and IP event dampening
Network Resiliency Best Practices
The most common network fault occurrence in the LAN network is a link failure between two systems.
Link failures can be caused by issues such as a fiber cut, miswiring, and so on. Redundant parallel
physical links between two systems can increase availability, but also change how overall higher layer
protocols construct the adjacency and loop-free forwarding topology to the parallel physical paths.
Deploying redundant parallel paths in the recommended community college LAN design by default
develops a non-optimal topology that keeps the network under-utilized and requires protocol-based
network recovery. In the same network design, the routed access model eliminates such limitations and
enables the full load balancing capabilities to increase bandwidth capacity and minimize the application
impact during a single path failure. To develop a consistent network resiliency service in the centralized
main and remote college campus sites, the following basic principles apply:
• Deploying redundant parallel paths is the basic requirement to employ network resiliency at any
tier. It is critical to simplify the control plane and forwarding plane operation by bundling all
physical paths into a single logical bundled interface (EtherChannel). Implement a defense-in-depth
approach to failure detection and recovery mechanisms. An example of this is configuring the
UniDirectional Link Detection (UDLD) protocol, which uses a Layer 2 keep-alive to test that the
switch-to-switch links are connected and operating correctly, and acts as a backup to the native
Layer 1 unidirectional link detection capabilities provided by 802.3z and 802.3ae standards. UDLD
is not an EtherChannel function; it operates independently over each individual physical port at
Layer 2 and remains transparent to the rest of the port configuration. Therefore, UDLD can be
deployed on ports implemented in Layer 2 or Layer 3 modes.
• Ensure that the design is self-stabilizing. Hardware or software errors may cause ports to flap, which
creates false alarms and destabilizes the network topology. Implementing route summarization
advertises a concise topology view to the network, which prevents core network instability.
However, within the summarized boundary, the network may not be protected against such flooding. Deploy IP event dampening as a tool to prevent the control plane and forwarding plane impact caused by physical topology instability.
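The two techniques above can be enabled with a few lines of configuration. This is a hedged sketch: the interface name is illustrative, and `dampening` is shown with default timers, which should be tuned to the deployment.

```
! UDLD aggressive mode on fiber uplinks, plus IP event dampening on a
! physical member link of the uplink EtherChannel.
udld aggressive
!
interface TenGigabitEthernet1/1
 description Physical member of uplink EtherChannel
 udld port aggressive
 dampening
```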
These principles are intended to be a complementary part of the overall structured modular design
approach to the campus design, and serve primarily to reinforce good resilient design practices.
Redundant power supplies for network systems protect against power outages, power supply failures,
and so on. It is important not only to protect the internal network system but also the endpoints that rely
on power delivery over the Ethernet network. Redundant power systems can be deployed in the two
following configuration modes:
• Modular switch—Dual power supplies can be deployed in modular switching platforms such as the
Cisco Catalyst 6500 and 4500-E Series platforms. By default, the power supply operates in
redundant mode, offering the 1+1 redundant option. Overall power capacity planning must be done
to dynamically allow for network growth. The power supplies can also be combined to allocate power to all internal and external resources, but combined mode may not be able to offer power redundancy.
• Fixed configuration switch—The power supply in fixed configuration switches can be internal or use
Cisco RPS 2300 external power supplies. A single Cisco RPS 2300 power supply uses a modular
power supply and fan for flexibility, and can deliver power to multiple switches. Deploying an
internal and external power supply solution protects critical access layer switches during power
outages, and provides complete fault transparency and constant network availability.
Device or node resiliency in modular Cisco Catalyst 6500/4500 platforms and Cisco StackWise provides
a 1+1 redundancy option with enterprise-class high availability and deterministic network recovery time.
The following sub-sections provide high availability design details, as well as graceful network recovery
techniques that do not impact the control plane and provide constant forwarding capabilities during
failure events.
Stateful Switchover
The stateful switchover (SSO) capability in modular switching platforms such as the Cisco Catalyst 4500
and 6500 provides complete carrier-class high availability in the campus network. Cisco recommends that the distribution and core layer design model be the center point of the entire college communication network. Deploying redundant supervisors in the mission-critical distribution and core systems provides non-stop communication throughout the network. To provide 99.999 percent service availability in the access layer for critical endpoints, such as Cisco TelePresence, the Catalyst 4500 must be equipped with redundant supervisors.
Cisco StackWise is a low-cost solution to provide device-level high availability. Cisco StackWise is
designed with unique hardware and software capabilities that distribute, synchronize, and protect
common forwarding information across all member switches in a stack ring. During master switch
failure, the new master switch re-election remains transparent to the network devices and endpoints.
Deploying Cisco StackWise according to the recommended guidelines protects against network
interruption, and recovers the network in sub-seconds during master switch re-election.
Bundling SSO with NSF capability and the awareness function allows the network to operate without
errors during a primary supervisor module failure. Users of realtime applications such as VoIP do not
hang up the phone, and IP video surveillance cameras do not freeze.
Non-Stop Forwarding
Cisco VSS and the single resilient campus system design provide uninterrupted network availability using non-stop forwarding (NSF) without impacting end-to-end application performance.
The Cisco VSS and redundant supervisor system is an NSF-capable platform; thus, every network device
that connects to VSS or the redundant supervisor system must be NSF-aware to provide optimal
resiliency. By default, most Cisco Layer 3 network devices are NSF-aware systems that operate in NSF
helper mode for graceful network recovery. (See Figure 3-36.)
Figure 3-36 Community College LAN Design NSF/SSO Capable and Aware Systems
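Enabling the NSF/SSO combination on an NSF-capable system takes only a few commands. The following is a hedged example; the EIGRP AS number is an illustrative assumption carried over from the earlier routing sketches.

```
! SSO redundancy plus EIGRP NSF on a redundant-supervisor system.
! NSF-aware neighbors gracefully restore routes during supervisor
! switchover.
redundancy
 mode sso
!
router eigrp 100
 nsf
```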
Catalyst 4500—ISSU
Full-image ISSU on the Cisco Catalyst 4500 leverages dual supervisors to allow for a full, in-place
Cisco IOS upgrade, such as moving from 12.2(50)SG to 12.2(53)SG for example. This leverages the
NSF/SSO capabilities of the switch and provides for less than 200 msec of traffic loss during a full
Cisco IOS upgrade.
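The full-image ISSU process follows a fixed command sequence. The outline below is a hedged sketch: the slot numbers and image name are illustrative, and the exact argument forms vary by release, so the platform documentation should be consulted before an upgrade.

```
! Outline of the Catalyst 4500 full-image ISSU sequence.
issu loadversion 1 bootflash:cat4500e-entservices-mz.122-53.SG 2 slavebootflash:cat4500e-entservices-mz.122-53.SG
issu runversion
issu acceptversion
issu commitversion
```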
Having the ability to operate the campus as a non-stop system depends on the appropriate capabilities
being designed-in from the start. Network and device level redundancy, along with the necessary
software control mechanisms, guarantee controlled and fast recovery of all data flows following any
network failure, while concurrently providing the ability to proactively manage the non-stop
infrastructure.
A network upgrade requires planned network and system downtime. VSS offers unmatched network
availability to the core. With the Enhanced Fast Software Upgrade (eFSU) feature, the VSS can continue
to provide network services during the upgrade. With the eFSU feature, the VSS network upgrade
remains transparent and hitless to the applications and end users (see Figure 3-37). Because eFSU works
in conjunction with NSF/SSO technology, the network devices can gracefully restore control and
forwarding information during the upgrade process, while the bandwidth capacity operates at 50 percent
and the data plane can converge within sub-seconds.
For a hitless software update, the ISSU process requires three sequential upgrade events for error-free
software install on both virtual switch systems. Each upgrade event causes traffic to be re-routed to a
redundant MEC path, causing sub-second traffic loss that does not impact realtime network applications,
such as VoIP.
Figure 3-37 VSS eFSU Upgrade Events and Operating Bandwidth Capacity

During the eFSU upgrade events (ISSU loadversion, runversion, and acceptversion), the operating bandwidth capacity alternates between 100 percent, when both virtual switch nodes forward traffic, and 50 percent, when a single virtual switch node forwards traffic.
Summary
Designing the LAN network aspects for the community college network design establishes the
foundation for all other aspects within the service fabric (WAN, security, mobility, and UC) as well as
laying the foundation to provide safety and security, operational efficiencies, virtual learning
environments, and secure classrooms.
This chapter reviews the two LAN design models recommended by Cisco, as well as where to apply these
models within the various locations of a community college network. Each of the layers is discussed and
design guidance is provided on where to place and how to deploy these layers. Finally, key network
foundation services such as routing, switching, QoS, multicast, and high availability best practices are
given for the entire community college design.