Cloud Networking: Scaling Out Data Center Networks
Network architectures and the network operating systems that make the cloud possible are fundamentally
different from the highly oversubscribed, hierarchical, multi-tiered and costly legacy solutions of the past.
Increased adoption of high performance servers and applications requiring higher bandwidth is driving adoption
of 10 and 25 Gigabit Ethernet switching in combination with 40 and 100 Gigabit Ethernet. Latest generation
switch silicon supports seamless transition from 10 and 40 Gigabit to 25 and 100 Gigabit Ethernet.
This whitepaper details Arista’s two-tier Spine/Leaf and single-tier Spline™ Universal Cloud Network designs
that provide unprecedented scale, performance and density without proprietary protocols, lock-ins or forklift
upgrades.
1. No proprietary protocols or vendor lock-ins. Arista believes in open standards. Our proven reference designs show that proprietary
protocols and vendor lock-ins aren’t required to build very large scale-out networks.
2. Fewer Tiers is better than More Tiers. Designs with fewer tiers (e.g. a 2-tier Spine/Leaf design rather than 3-tier) decrease cost,
complexity, cabling and power/heat. Single-tier Spline network designs don’t use any ports for interconnecting tiers of switches
and so provide the lowest cost per usable port. A legacy design that may have required 3 or more tiers to achieve the required port
count just a few years ago can now be achieved in a 1- or 2-tier design.
3. No protocol religion. Arista supports scale-out designs built at layer 2 or layer 3 or hybrid L2/L3 designs with open multi-vendor
supported protocols like VXLAN that combine the flexibility of L2 with the scale-out characteristics of L3.
4. Modern infrastructure should be run active/active. Multi-chassis Link Aggregation (MLAG) at layer 2 and Equal Cost Multi-Pathing
(ECMP) at layer 3 enable infrastructure to be built active/active with no ports blocked, so that networks can use all the links
available between any two devices.
5. Designs should be agile and allow for flexibility in port speeds. The inflection point at which the majority of servers/compute nodes
move from 1G to 10G connectivity is between 2013 and 2015. This in turn drives the requirement for network uplinks to migrate from 10G
to 40G and on to 100G. Arista switches and reference designs enable that flexibility.
6. Scale-out designs enable infrastructure to start small and evolve over time. A two-way ECMP design can grow from 2-way to
4-way, 8-way, 16-way and as far as a 32-way design. An ECMP design can grow over time without significant up-front capital
investment.
7. Large Buffers can be important. Modern Operating Systems, Network Interface Cards (NICs) and scale-out storage arrays make
use of techniques such as TCP Segmentation Offload (TSO), GSO and LSO. These techniques are fundamental to reducing the
CPU cycles required when servers send large amounts of data. A side effect of these techniques is that an application/OS/storage
stack that wishes to transmit a chunk of data offloads it to the NIC, which slices the data into segments and puts them
on the wire as back-to-back frames at line rate. If more than one of these bursts is destined to the same output port then microburst
congestion occurs.
One approach to dealing with bursts is to build a network with minimal oversubscription, overprovisioning links so that they
can absorb bursts. Another is to reduce the fan-in of traffic. A third approach is to deploy switches with deep buffers that can
absorb the bursts; a switch with insufficient buffering drops packets, which in turn results in lower good-put (useful throughput).
A short sketch of the buffering arithmetic follows this list.
8. Consistent features and OS. All Arista switches use the same Arista EOS. There is no difference in platform, software trains or OS.
It’s the same binary image across all switches.
9. Interoperability. Arista switches and designs can interoperate with other networking vendors with no proprietary lock-in.
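As a rough illustration of why buffering matters for these offloaded bursts, here is a minimal sketch; the sender count, TSO chunk size and port speeds below are illustrative assumptions rather than figures from any of the designs in this paper:

```python
# Rough microburst arithmetic: N senders each offload one TSO chunk that the NIC
# transmits as back-to-back line-rate frames toward the same output port.
SENDERS = 4                  # fan-in to one egress port (assumption)
TSO_CHUNK_BYTES = 64 * 1024  # typical TSO offload size (assumption)
LINK_GBPS = 10               # ingress and egress ports all run at 10G

arriving_bytes = SENDERS * TSO_CHUNK_BYTES
# The egress port drains roughly one sender's worth of traffic while all four
# chunks arrive, so about (SENDERS - 1) chunks must sit in the egress buffer.
buffer_needed = (SENDERS - 1) * TSO_CHUNK_BYTES
burst_duration_us = arriving_bytes * 8 / (SENDERS * LINK_GBPS * 1e9) * 1e6

print(f"Buffer needed to avoid loss: ~{buffer_needed // 1024} KB "
      f"for a burst lasting ~{burst_duration_us:.0f} us")
```

Even this small fan-in needs roughly 192 KB of egress buffering for a burst lasting about 50 microseconds; larger fan-in or larger offload chunks scale the requirement linearly.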
If the longer-term requirements for the number of ports can be fulfilled in a single switch (or a pair of switches in an HA design), then
there’s no reason why a single-tier Spline design should not be used.
Table 1: Arista switch platforms - maximum ports (10G / 25G / 40G / 50G / 100G), interface types and key characteristics

Arista 7500E Series
Maximum ports: 7508E - 1152x10G / 288x40G / 96x100G; 7504E - 576x10G / 144x40G / 48x100G
Interface types: RJ45 (100/1000/10G-T), SFP+/SFP (10G/1G), QSFP+ (40G/4x10G), MXP (100G/3x40G/12x10G), CFP2 (100G), QSFP100 (100G)
Key characteristics: Best suited to two-tier Spine/Leaf designs but can be used in Spline designs; MXP ports provide the most interface speed flexibility; deep buffers

Arista 7320X Series
Maximum ports: 7328X - 1024x10G / 1024x25G / 256x40G / 512x50G / 256x100G; 7324X - 512x10G / 512x25G / 128x40G / 256x50G / 128x100G
Interface types: QSFP100 (100G/4x25G/2x50G), QSFP+ (40G/4x10G)
Key characteristics: Best suited for larger Spline end-of-row / middle-of-row designs but can be used as spine in two-tier designs with the highest 10G/40G capacity

Arista 7300X Series
Maximum ports: 7316X - 2048x10G / 512x40G; 7308X - 1024x10G / 256x40G; 7304X - 512x10G / 128x40G
Interface types: RJ45 (100/1000/10G-T), SFP+/SFP (10G/1G), QSFP+ (40G/4x10G)
Key characteristics: Best suited for larger Spline end-of-row / middle-of-row designs but can be used as spine in two-tier designs with the highest 10G/40G capacity; RJ45 10GBASE-T enables a seamless 100M/1G/10G transition

Arista 7260X & 7060X Series
Maximum ports: 7260CX-64 - 258x10G / 256x25G / 64x40G / 128x50G / 64x100G; 7260QX-64 - 2x10G / 64x40G; 7060CX-32S - 130x10G / 128x25G / 32x40G / 64x50G / 32x100G
Interface types: QSFP+ (40G/4x10G), QSFP100 (100G/2x50G/4x25G); the 7260QX-64 uses QSFP+ (40G) only
Key characteristics: 7260CX-64 best suited for midsized Spline end-of-row / middle-of-row designs w/ optical/DAC connectivity; 7060CX-32S best suited for small Spline end-of-row / middle-of-row designs w/ optical/DAC connectivity; 7260QX-64 not targeted at Spline designs

Arista 7250X & 7050X Series
Maximum ports: 7250QX-64 - 256x10G / 32x40G; 7050QX-32S - 96x10G / 8x40G; 7050SX-128 - 96x10G / 8x40G; 7050TX-128 - 96x10G / 6x40G; 7050SX-72Q - 48x10G / 6x40G; 7050TX-72Q - 48x10G / 6x40G; 7050SX-96 - 48x10G / 6x40G; 7050TX-96 - 48x10G / 6x40G; 7050SX-64 - 48x10G / 4x40G; 7050TX-64 - 48x10G / 4x40G; 7050TX-48 - 32x10G / 4x40G
Interface types: RJ45 (100/1000/10G-T) [TX models], SFP+/SFP (10G/1G) [SX models], QSFP+ (40G/4x10G) [QX, SX/TX-64, TX-48], QSFP+ (40G) [SX/TX-96], MXP (3x40G) [SX/TX-72]
Key characteristics: 7250QX-64 best suited for midsized Spline end-of-row / middle-of-row designs w/ optical/DAC connectivity; the 7050X models best suited for small Spline end-of-row / middle-of-row designs w/ optical/DAC connectivity

Arista 7150S Series
Maximum ports: 7150S-64 - 48x10G / 4x40G (16x40G using AgilePorts); 7150S-52 - 52x10G (13x40G using AgilePorts); 7150S-24 - 24x10G (6x40G using AgilePorts)
Interface types: SFP+/SFP (10G/1G), QSFP+ (40G/4x10G) [7150S-64]
Key characteristics: Best suited for small Spline end-of-row / middle-of-row designs w/ optical/DAC connectivity
Scale-out designs can start with one pair of spine switches and some quantity of leaf switches. A two-tier Spine/Leaf network
design at 3:1 oversubscription for 10G attached devices has 96x10G ports for servers/compute/storage and 8x40G uplinks
per leaf switch (Arista 7050SX-128 – 96x10G : 8x40G uplinks = 3:1 oversubscribed).
Figure 2: Arista Spine/Leaf two-tier network designs provide scale in excess of 100,000 physical servers
An alternate design for 10G could make use of 100G uplinks, e.g. Arista 7060CX-32S with 24x100G ports running with 4x10G
breakout for 96x10G ports for servers/compute/storage and 8x100G uplinks. Such a design would now only be 1.2:1 oversubscribed.
A design for 25G attached devices could use the 7060CX-32S with 24x100G ports broken out to 96x25G ports for servers/compute/
storage and the remaining 8x100G ports for uplinks; this would also be 3:1 oversubscribed (96x25G = 2400G : 8x100G = 800G).
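These ratios are simple bandwidth quotients; a minimal sketch of the arithmetic, using the port counts from the examples above:

```python
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on a leaf switch."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 7050SX-128 style leaf: 96x10G down, 8x40G up
print(oversubscription(96, 10, 8, 40))    # 3.0 -> 3:1
# 7060CX-32S with 4x10G breakout: 96x10G down, 8x100G up
print(oversubscription(96, 10, 8, 100))   # 1.2 -> 1.2:1
# 7060CX-32S with 4x25G breakout: 96x25G down, 8x100G up
print(oversubscription(96, 25, 8, 100))   # 3.0 -> 3:1
```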
Two-tier Spine/Leaf network designs enable horizontal scale-out with the number of spine switches growing linearly as the number
of leaf switches grows over time. The maximum scale achievable is a function of the density of the spine switches, the scale-out that
can be achieved (this is a function of cabling and number of physical uplinks from each leaf switch) and desired oversubscription
ratio.
Either modular or fixed configuration switches can be used as spine switches in a two-tier Spine/Leaf design; however, the spine
switch choice locks in the maximum scale that a design can grow to. This is shown below in Table 2 (10G connectivity to leaf) and
Table 3 (25G connectivity to leaf).
Table 2: Maximum scale that is achievable in an Arista two-tier Spine/Leaf design for 10G attached devices w/ 40G uplinks *

Spine platform: Arista 7504E; spine switches (scale-out): 2, 4, 8; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 8x40G; leaf switch: Arista 7050QX-32 or Arista 7050SX-128
Design supports up to: 36 leaf x 96x10G = 3,456 x 10G; 72 leaf x 96x10G = 6,912 x 10G; 144 leaf x 96x10G = 13,824 x 10G

Spine platform: Arista 7508E; spine switches (scale-out): 2, 4, 8; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 8x40G; leaf switch: Arista 7050QX-32 or Arista 7050SX-128
Design supports up to: 72 leaf x 96x10G = 6,912 x 10G; 144 leaf x 96x10G = 13,824 x 10G; 288 leaf x 96x10G = 27,648 x 10G

Spine platform: Arista 7508E; spine switches (scale-out): 2, 4, 8; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 16x40G; leaf switch: Arista 7250QX-64
Design supports up to: 36 leaf x 192x10G = 6,912 x 10G; 72 leaf x 192x10G = 13,824 x 10G; 144 leaf x 192x10G = 27,648 x 10G; 288 leaf x 192x10G = 55,296 x 10G

Spine platform: Arista 7508E; spine switches (scale-out): 2, 4, 8; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 32x40G; leaf switch: Arista 7304X w/ 7300X-32Q linecards
Design supports up to: 18 leaf x 384x10G = 6,912 x 10G; 36 leaf x 384x10G = 13,824 x 10G; 72 leaf x 384x10G = 27,648 x 10G; 144 leaf x 384x10G = 55,296 x 10G; 288 leaf x 384x10G = 110,592 x 10G

Spine platform: Arista 7316X; spine switches (scale-out): 2, 4, 8; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 64x40G; leaf switch: Arista 7308X w/ 7300X-32Q linecards
Design supports up to: 16 leaf x 768x10G = 12,288 x 10G; 32 leaf x 768x10G = 24,576 x 10G; 64 leaf x 768x10G = 49,152 x 10G; 128 leaf x 768x10G = 98,304 x 10G; 256 leaf x 768x10G = 196,608 x 10G; 512 leaf x 768x10G = 393,216 x 10G
Table 3: Maximum scale that is achievable in an Arista two-tier Spine/Leaf design for 25G attached devices w/ 100G uplinks *

Spine platform: Arista 7508E; spine switches (scale-out): 2, 4, 8; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 8x100G; leaf switch: Arista 7060CX-32
Design supports up to: 24 leaf x 96x25G = 2,304 x 25G; 48 leaf x 96x25G = 4,608 x 25G; 96 leaf x 96x25G = 9,216 x 25G

Spine platform: Arista 7508E; spine switches (scale-out): 4, 8, 16; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 16x100G; leaf switch: Arista 7260CX-64
Design supports up to: 24 leaf x 192x25G = 4,608 x 25G; 48 leaf x 192x25G = 9,216 x 25G; 96 leaf x 192x25G = 18,432 x 25G

Spine platform: Arista 7328X; spine switches (scale-out): 4, 8; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 8x100G; leaf switch: Arista 7060CX-32
Design supports up to: 128 leaf x 96x25G = 12,288 x 25G; 256 leaf x 96x25G = 24,576 x 25G

Spine platform: Arista 7328X; spine switches (scale-out): 4, 8, 16; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 16x100G; leaf switch: Arista 7260CX-64
Design supports up to: 64 leaf x 192x25G = 12,288 x 25G; 128 leaf x 192x25G = 24,576 x 25G; 256 leaf x 192x25G = 49,152 x 25G

Spine platform: Arista 7328X; spine switches (scale-out): 16, 32, 64; oversubscription spine to leaf: 3:1; leaf-to-spine connectivity: 64x100G; leaf switch: Arista 7328X w/ 7320CX-32 linecards
Design supports up to: 64 leaf x 768x25G = 49,152 x 25G; 128 leaf x 768x25G = 98,304 x 25G; 256 leaf x 768x25G = 196,608 x 25G; 512 leaf x 768x25G = 393,216 x 25G
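The leaf and port counts in Tables 2 and 3 follow from straightforward arithmetic; a minimal sketch, using spine port counts from Table 1 and uplink counts from the rows above:

```python
def max_scale(spine_uplink_ports, uplinks_per_leaf, num_spines, leaf_edge_ports):
    """Maximum leaf switches and edge ports in a two-tier Spine/Leaf design.

    spine_uplink_ports: ports per spine switch at the uplink speed
    uplinks_per_leaf:   uplinks on each leaf, spread evenly across the spines
    num_spines:         number of spine switches (ECMP/MLAG width)
    leaf_edge_ports:    server-facing ports per leaf switch
    """
    uplinks_per_leaf_per_spine = uplinks_per_leaf // num_spines
    leaves = spine_uplink_ports // uplinks_per_leaf_per_spine
    return leaves, leaves * leaf_edge_ports

# Arista 7508E spine (288x40G) with 8x40G uplinks per 96x10G leaf, 2 spines:
print(max_scale(288, 8, 2, 96))   # (72, 6912)   -> matches Table 2
# The same design scaled out to 8 spine switches:
print(max_scale(288, 8, 8, 96))   # (288, 27648) -> matches Table 2
```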
If the port requirements double from 4 to 8 usable ports, and the building block is a 4-port switch, the network grows from
a single tier to two tiers and the number of switches required increases from 1 to 6 in order to maintain a non-oversubscribed
network. For a 2x increase in usable ports there is a 3-fold increase in cost per usable port (in reality the cost goes up even
more than 3x, as there is also the cost of the interconnect cables or transceivers/fiber).
If the port count requirements double again from 8 to 16, a third tier is required, increasing the number of switches from 6 to 20,
a further 3.3x increase in devices/cost for just a doubling of capacity. Compared to a single-tier design, this 3-tier design
now offers 4x more usable ports (16 compared to 4) but does so at over a 20x increase in cost compared to the original single-switch
design.
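These switch counts come from building a non-oversubscribed fat-tree out of 4-port building blocks; a minimal sketch of the arithmetic (a uniform per-switch cost is an illustrative assumption):

```python
def clos_cost(k, tiers, cost_per_switch=1.0):
    """Usable ports, switch count and relative cost/port for a non-oversubscribed
    design built from k-port switches (k=4 reproduces the example in the text)."""
    if tiers == 1:
        ports, switches = k, 1
    elif tiers == 2:
        ports, switches = k * k // 2, 3 * k // 2       # k leaves + k/2 spines
    else:  # classic 3-tier fat tree
        ports, switches = k**3 // 4, 5 * k * k // 4    # edge + aggregation + core
    return ports, switches, switches * cost_per_switch / ports

for tiers in (1, 2, 3):
    print(tiers, clos_cost(4, tiers))
# 1 tier: (4, 1, 0.25)   -> 4 usable ports from a single switch
# 2 tier: (8, 6, 0.75)   -> 3x the single-switch cost per usable port
# 3 tier: (16, 20, 1.25) -> 5x the cost per usable port, 20x the total switch cost
```

With k=4 the sketch reproduces the 1, 6 and 20 switch counts above, before cabling and optics are even counted.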
Capital expense (capex) costs go up with increased scale. However, capex can be dramatically reduced if a network can be
built using fewer tiers, as less cost is sunk into the interconnects between tiers. Operational expense (opex) also decreases
dramatically with fewer devices to manage, power and cool. All network designs should be evaluated from the perspective of the
cost per usable port (those ports used for servers/storage) over the lifetime of the network. Cost per usable port is calculated as:
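One way to express this, treating lifetime capex and opex as the inputs described above (a sketch of the calculation rather than a quoted formula):

\[
\text{Cost per usable port} = \frac{\text{CapEx}_{\text{switches, optics, cabling}} + \text{OpEx}_{\text{power, cooling, management over the lifetime}}}{\text{Number of usable (server/storage-facing) ports}}
\]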
Oversubscription
Oversubscription is the ratio of potential contention should all devices send traffic at the same time. It can be measured in a
north/south direction (traffic entering/leaving a data center) as well as east/west (traffic between devices within the data
center). Many legacy data center designs have very large oversubscription ratios, upwards of 20:1 for both north/south and
east/west, because of the large number of tiers and the limited density/ports of the switches, but also because of historically
much lower traffic levels per server. Legacy designs also typically place the L3 gateway in the core or aggregation layer, which
forces traffic between VLANs to traverse all tiers. This is suboptimal on many levels.
Figure 4: Leaf switch deployed with 3:1 oversubscription (48x10G down to 4x40G up)
Significant increases in the use of multi-core CPUs, server virtualization, flash storage, Big Data and cloud computing have driven the
requirement for modern networks to have lower oversubscription. Current modern network designs have oversubscription ratios of
3:1 or less. In a two-tier design this oversubscription is measured as the ratio of downlink ports (to servers/storage) to uplink ports
(to spine switches). For a 64-port leaf switch at 3:1 this equates to 48 ports down to 16 ports up. In contrast, a 1:1 design with a
64-port leaf switch would have 32 ports down and 32 ports up.
A good rule-of-thumb in a modern data center is to start with an oversubscription ratio of 3:1. Features like Arista Latency Analyzer
(LANZ) can identify hotspots of congestion before it results in service degradation (seen as packet drops) allowing for some
flexibility in modifying the design ratios if traffic is exceeding available capacity.
An ideal scenario always has the uplinks operating at a faster speed than the downlinks, to ensure there is no blocking caused by
micro-bursts from a single host transmitting at line rate.
As more leaf switches are added and the ports on the first linecard of the spine switches are used up, a second linecard is added
to each chassis and half of the links are moved to the second linecard. The design can grow from 18 leaf switches to 36 leaf
switches (1,728 x 10G attached devices @ 3:1 oversubscription end-to-end), as shown in Figure 6.
Figure 6: First expansion of spine in a scale-out design: second linecard module
This process repeats a number of times over. If the uplinks between leaf and spine are at 10G, the uplinks can be distributed
across 4 ports on 4 linecards in each spine switch.
The final scale of this design is a function of the port scale/density of the spine switches, the desired oversubscription ratio and
the number of spine switches. Provided there are two spine switches, the design can be built at layer 2 or layer 3. Final scale for
two Arista 7504E spine switches is 72 leaf switches, or 3,456 x 10G @ 3:1 oversubscription end-to-end. If the design uses a pair of
Arista 7508E switches then it is double that, i.e. 144 leaf switches for 6,912 x 10G @ 3:1 oversubscription end-to-end, as shown
in Figure 7.
Figure 7: Final expansion of spine in a scale-out design: add a fourth linecard module to each Arista 7504E
Layer 2 or Layer 3
Two-tier Spine/Leaf networks can be built at either layer 2 (VLAN everywhere) or layer 3 (subnets). Each has their advantages and
disadvantages.
Layer 2 designs offer the most flexibility, allowing VLANs to span everywhere and MAC addresses to migrate anywhere. The
downsides are that there is a single (potentially quite large) common fault domain; scale is limited by the MAC address table
size of the smallest switch in the network; troubleshooting can be challenging; L3 scale and convergence time are determined by
the size of the host route table on the L3 gateway; and the largest non-blocking fan-out network is a spine layer two switches wide
utilizing Multi-chassis Link Aggregation (MLAG).
Layer 3 designs provide the fastest convergence times and the largest scale with fan-out with Equal Cost Multi Pathing (ECMP)
supporting up to 32 or more active/active spine switches. These designs localize the L2/L3 gateway to the first hop switch allowing
for the most flexibility in allowing different classes of switches to be utilized to their maximum capability without any dumbing
down (lowest-common-denominator) between switches.
Layer 3 designs do restrict VLANs and MAC address mobility to a single switch or pair of switches and so limit the scope of VM
mobility to the reach of a single switch or pair of switches, which is typically to within a rack or several racks at most.
VXLAN capabilities can be enabled in software via a virtual switch as part of a virtual server infrastructure. This approach extends
layer 2 over layer 3 but doesn’t address how traffic gets to the correct physical server in the most optimal manner. A software-
based approach to deploying VXLAN or other overlays in the network also costs CPU cycles on the server, as a result of the offload
capabilities on the NIC being disabled.
Hardware VXLAN Gateway capabilities on Arista switches enable the most flexibility, greatest scale and traffic optimization. The
physical network remains at layer 3 for maximum scale-out, best table/capability utilization and fastest convergence times. Servers
continue to provide NIC CPU offload capability and the VXLAN Hardware Gateway provides layer 2 and layer 3 forwarding, alongside
the layer 2 overlay over layer 3 forwarding.
Arista switch platforms with hardware VXLAN gateway capabilities include: all Arista switches that have the letter ‘E’ or ‘X’ (Arista
7500E Series, Arista 7280E, Arista 7320X Series, Arista 7300E Series, Arista 7060X Series, Arista 7050X Series) and Arista 7150S Series.
These platforms support unicast-based hardware VXLAN gateway capabilities with orchestration via Arista CloudVision, via open
standards-based non-proprietary protocols such as OVSDB or via static configuration. This open approach to hardware VXLAN
gateway capabilities provides end users choice between cloud orchestration platforms without any proprietary vendor lock-in.
Historically a server or host had just a single MAC address and a single IP address. With server virtualization this has become at least
1 MAC address and 1 IP address per virtual server and more than one address/VM if there are additional virtual NICs (vNICs) defined.
Many IT organizations are deploying dual IPv4 / IPv6 stacks (or plan to in the future) and forwarding tables on switches must take
into account both IPv4 and IPv6 table requirements.
If a network is built at Layer 2 every switch learns every MAC address in the network, and the switches at the spine provide the
forwarding between layer 2 and layer 3 and have to provide the gateway host routes.
If a network is built at Layer 3 then the spine switches only need to use IP forwarding for a subnet (or two) per leaf switch and don’t
need to know about any host MAC addresses. Leaf switches need to know about the IP host routes and MAC addresses local to them
but don’t need to know about anything outside their local connections. The only routing prefix a leaf switch requires is a
default route towards the spine switches.
Regardless of whether the network is built at layer 2 or layer 3, it is frequently the number of VMs that drives the networking table
sizes. A modern x86 server is dual-socket with 6 or 8 CPU cores per socket. Typical enterprise workloads allow for 10 VMs per
CPU core, so it is not unusual for a typical server to run 60-80 VMs, and it is foreseeable that this number will only get larger
in the future.
For a design with 10 VMs/CPU core, quad-core CPUs with 2 sockets per server, 40 physical servers per rack and 20 racks of servers, this
would drive the forwarding table requirements of the network as follows:
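A minimal sketch of the arithmetic these assumptions drive, reading the design as 10 VMs per CPU core with one MAC address and one IPv4 host route per VM and per physical host, and treating dual-stack as a simple doubling:

```python
VMS_PER_CORE     = 10
CORES_PER_CPU    = 4
SOCKETS          = 2
SERVERS_PER_RACK = 40
RACKS            = 20

servers = SERVERS_PER_RACK * RACKS                            # 800 physical hosts
vms     = VMS_PER_CORE * CORES_PER_CPU * SOCKETS * servers    # 64,000 VMs

mac_entries      = vms + servers   # ~64,800 MAC addresses visible at layer 2
ipv4_host_routes = vms + servers   # ~64,800 host routes at the L3 gateway
# Dual-stack roughly doubles the host-route count (IPv6 entries are often wider
# in hardware, so the real table cost can be higher still).
dual_stack_routes = 2 * ipv4_host_routes

print(vms, mac_entries, dual_stack_routes)   # 64000 64800 129600
```

Forwarding tables on the switches holding the L2/L3 gateway therefore need headroom for tens of thousands of MAC entries and host routes at this scale.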
With Rapid Per VLAN Spanning Tree (RPVST), the switch maintains multiple independent instances of spanning tree (for each VLAN),
sending/receiving BPDUs on ports at regular intervals and changing the port state on physical ports from Learning/Listening/
Forwarding/Blocking based on those BPDUs. Managing a large number of non-synchronized independent instances presents a scale
challenge unless there is careful design of VLAN trunking. As an example, trunking 4K VLANs on a single port results in the state of
each VLAN needing to be tracked individually.
Multiple Spanning Tree Protocol (MSTP) is preferable to RPVST as there are fewer instances of the spanning tree protocol operating, and
physical ports can be moved between states in groups. Even with this improvement, layer 2 logical port counts still
need to be managed carefully.
The individual scale characteristics of switches participating in Spanning Tree varies but the key points to factor into a design are:
• The number of STP Logical Ports supported on a given switch (this is also sometimes referred to as the number of VlanPorts).
• The number of instances of Spanning Tree that are supported if RPVST is being used.
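The logical-port figure referenced above is just the product of trunk ports and the VLANs carried on each; a minimal sketch (the counts are illustrative):

```python
def stp_logical_ports(trunk_ports, vlans_per_trunk):
    """Per-VLAN spanning tree state a switch must maintain (RPVST-style)."""
    return trunk_ports * vlans_per_trunk

# 48 trunk ports each carrying 4,000 VLANs:
print(stp_logical_ports(48, 4000))   # 192,000 logical ports to track
```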
We would always recommend a layer 3 ECMP design, with VXLAN used as an overlay to stretch layer 2 over layer 3, rather than a large
layer 2 design built on Spanning Tree. Designs with layer 3 and VXLAN provide the most flexibility, greatest scale and traffic
optimization, as well as the smallest failure domain, the most optimal table/capacity utilization and the fastest convergence times.
Figure 9: Spine/Leaf network design for 1G attached nodes using Arista 7504E/7508E spine
switches (maximum scale with 2 switches) with uplinks at 10G
Spine/Leaf Design 10G Nodes @ 3:1 Oversubscription Using 2 Spine Arista 7500 Series
Figure 10: Spine/Leaf network design for 10G attached nodes @ 3:1 oversubscription using Arista
7504E/7508E spine switches (maximum scale with 2 switches) with uplinks at 10G
Spine/Leaf Design 10G Nodes Non-Oversubscription Using 2 Spine Arista 7500 Series
Figure 11: Spine/Leaf network design for 10G attached nodes non-oversubscribed using Arista
7504E/7508E spine switches (maximum scale with 2 switches) with uplinks at 10G
These topologies can all be built at layer 2 or layer 3. If the designs are layer 2, MLAG provides an L2 network that runs active/active
with no blocked links, which requires an MLAG peer-link between the spine switches.
It may also be desirable to use MLAG on the leaf switches to connect servers/storage in an active/active manner. In this case, a pair of
leaf switches would be an MLAG pair and would have an MLAG peer-link between them. The MLAG peer-link can be a relatively small
number of physical links (at least 2) as MLAG prioritizes network traffic so that it remains local to a switch for dual-attached devices.
Figure 12: Arista 7504E/7508E Spine 4-way ECMP to Arista 64-port 10G Leaf switches @ 3:1 Oversubscription
Figure 13: Arista 7504E/7508E Spine 8-way ECMP to Arista 64-port 10G Leaf switches @ 3:1 Oversubscription
Figure 14: Arista 7504E/7508E Spine 16-way ECMP to Arista 64-port 10G Leaf switches @ 3:1 Oversubscription
The following diagrams demonstrate how a 1G server design scales with 4-way ECMP (each leaf switch has 4x10G uplinks for 48x1G
server/storage connectivity):
Figure 15: Arista 7504E/7508E Spine 4-way ECMP to Arista 48x10G Leaf switches @ 1.2:1 Oversubscription
The same design principles can be applied to build a 10G network that is non-oversubscribed. The network size can evolve over
time (pay as you grow) with a relatively modest up-front capex investment:
Figure 16: Arista 7504E/7508E Spine 4-way ECMP to Arista 64-port 10G Leaf switches non-oversubscribed
Figure 17: Arista 7504E/7508E Spine 8-way ECMP to Arista 64-port 10G Leaf switches non-oversubscribed
Figure 18: Arista 7504E/7508E Spine 16-way ECMP to Arista 64-port 10G Leaf switches non-oversubscribed
Figure 19: Arista 7504E/7508E Spine 32-way ECMP to Arista 64-port 10G Leaf switches non-oversubscribed
The following diagrams show the maximum scale using 40G uplinks from leaf to spine in a layer 3 ECMP design for 3:1
oversubscribed 10G nodes:
Figure 20: Arista 7504E/7508E Spine 4-way ECMP to Arista 48x10G + 4x40G Leaf switches @ 3:1 Oversubscription
QSFP+ transceivers are used for 40G connectivity; they also allow a single physical port to be broken out as either 1x40G or 4x10G.
Embedded 100G Optics are used on Arista 7500E-72S-LC and 7500E-12CM-LC linecard modules to provide industry-standard
100GBASE-SR10, 40GBASE-SR4 and 10GBASE-SR ports without requiring any transceivers. This provides the most cost effective and
highest density 10/40/100G connectivity in the industry. Each port mates with a standard MPO/MTP24 cable (12 fiber pairs on the one
cable/connector) and provides incredible configuration flexibility enabling one port to operate as any of:
• 1 x 100GBASE-SR10
• 3 x 40GBASE-SR4
• 2 x 40GBASE-SR4 + 4 x 10GBASE-SR
• 1 x 40GBASE-SR4 + 8 x 10GBASE-SR
• 12 x 10GBASE-SR
These ports can be used with OM3/OM4 MMF supporting distances of 300m (OM3) and 400m (OM4). MPO to 12xLC patch cables
break out to 12 x LC connectors for connectivity into SFP+.
Arista AgilePorts on some switches can use a group of 4 or 10 SFP+ ports to create an industry-standard 40GBASE-SR4 or 100GBASE-SR10
port. This provides additional flexibility in how networks can grow and evolve from 10G to 40G and 100G, with increased flexibility
in the distances supported.
QSFP100 transceivers are optimized for 100G data center reach connectivity. There are also options for breakout to 4x25G/2x50G on
some switch platforms.
Table 8: QSFP100 Transceiver Options
CFP2 transceivers are optimized for longer distance connectivity between data centers in the same metro region.
This innovative structure enables EOS to deliver unparalleled programmatic access including:
• Arista EOS SDK - exposing the industry’s first programmable and extensible network platform with deployed customer
examples
• EOS APIs - JSON over HTTP/HTTPS API with fully functional access to the underlying system
• Technology partner integration with Splunk, F5, Palo Alto Networks, Nuage, VMware and others through Arista’s open APIs
• Arista CloudVision - Arista is extending the EOS platform by providing a network-wide multi-function controller as a single API
for real-time provisioning, orchestration and integration with multi-vendor controllers. CloudVision abstracts the state of all
the switches in the network by providing a centralized instance of Sysdb, which provides a reliable and scalable approach to
network wide visibility.
Arista’s Universal Cloud Network designs are underpinned by a number of foundation features of Arista’s award-winning Extensible
Operating System:
MLAG
To a neighboring device, MLAG behaves the same as standard link aggregation (LAG) and can run either with Link Aggregation
Control Protocol (LACP) (formerly IEEE 802.3ad, more recently IEEE 802.1AX-2008) or in a static ‘mode on’ configuration.
The MLAG pair of switches synchronize forwarding state between them such that the failure of one node doesn’t result in any
disruption or outage as there are no protocols to go from standby to active, or new state to learn as the devices are operating in
active/active mode.
Zero Touch Replacement (ZTR)
An extension to Zero Touch Provisioning (ZTP), Zero Touch Replacement (ZTR) enables switches to be physically replaced, with the
replacement switch picking up the same image and configuration as the switch it replaced. Switch identity and configuration aren’t
tied to the switch MAC address but instead to the location in the network where the device is attached (based on LLDP information
from neighboring devices).
While a hardware failure and RMA is not likely to be a common event, ZTR means that in this situation the time-to-restoration is
reduced to the time it takes for a replacement switch to arrive and be physically cabled, and does not depend on a network engineer
being physically in front of the switch with a serial console cable to provide the device configuration.
VM Tracer
As virtualized data centers have grown in size, the physical and virtual networks that support them have also grown in size and
complexity. Virtual machines connect through virtual switches and then to the physical infrastructure, adding a layer of abstraction
and complexity. Server-side tools have emerged to help VMware administrators manage virtual machines and networks; however,
equivalent tools to help the network administrator resolve conflicts between physical and virtual networks have not surfaced.
Arista VM Tracer provides this bridge by automatically discovering which physical servers are virtualized (by talking to VMware
vCenter APIs) and what VLANs they are meant to be in (based on policies in vCenter), and then automatically applying physical switch
port configurations in real time in response to vMotion events. This results in automated port configuration and VLAN database
membership, and the dynamic addition/removal of VLANs on trunk ports.
VM Tracer also provides the network engineer with detailed visibility into the VM and physical server on a physical switch port while
enabling flexibility and automation between server and network teams.
VXLAN
VXLAN is a multi-vendor, industry-supported network virtualization technology that enables much larger networks to be built
at layer 2 without the inherent scale issues of large layer 2 networks. It uses a VLAN-like encapsulation technique to
encapsulate layer 2 Ethernet frames within IP packets at layer 3 and as such is categorized as an ‘overlay’ network. From a virtual
machine perspective, VXLAN enables VMs to be deployed on any server in any location, regardless of the IP subnet or VLAN that the
physical server resides in.
VXLAN provides solutions to a number of underlying issues with layer-2 network scale, namely:
• Potential ability to localize flooding (unknown destination) and broadcast traffic to a single site
• Enables large layer 2 networks to be built without every device having to see every other MAC address
VXLAN is an industry-standard method of supporting layer 2 overlays across layer 3. As multiple vendors support VXLAN there are
subsequently a variety of ways VXLAN can be deployed: as a software feature on hypervisor-resident virtual switches, on firewall and
load-balancing appliances and on VXLAN hardware gateways built into L3 switches. Arista switch platforms with hardware VXLAN
gateway capabilities include: all Arista switches that have the letter ‘E’ or ‘X’ (Arista 7500E Series, Arista 7280E, Arista 7320X Series,
Arista 7300E Series, Arista 7060X Series, Arista 7050X Series) and Arista 7150 Series.
These platforms support unicast-based hardware VXLAN gateway capabilities with orchestration via Arista CloudVision, via open
standards-based non-proprietary protocols such as OVSDB or via static configuration. This open approach to hardware VXLAN
gateway capabilities provides end users choice between cloud orchestration platforms without any proprietary vendor lock-in.
LANZ
Arista Latency Analyzer (LANZ) enables tracking of network congestion in real time before congestion causes performance issues.
Today’s systems often detect congestion when someone complains, “The network seems slow.” The network team gets a trouble
ticket, and upon inspection can see packet loss on critical interfaces. The best solution historically available to the network team has
been to mirror the problematic port to a packet capture device and hope the congestion problem repeats itself.
Now, with LANZ’s proactive congestion detection and alerting capability, both human administrators and integrated applications
can identify and react to congestion hotspots before they result in degraded application performance.
Arista eAPI
Once eAPI is enabled, the switch accepts commands using Arista’s CLI syntax and responds with machine-readable output and
errors serialized in JSON, served over HTTP or HTTPS. The simplicity of this protocol and the availability of JSON-RPC clients across
all scripting languages means that eAPI is language agnostic and can easily be integrated into existing infrastructure and
workflows, whether scripts run on-box or off-box.
Arista ensures that a command’s structured output will always remain forward compatible for multiple future versions of EOS
allowing end users to confidently develop critical applications without compromising their ability to upgrade to newer EOS releases
and access new features.
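As an illustration, a minimal eAPI call using Python and the requests library; the switch name, credentials and TLS handling below are placeholder lab assumptions, not recommendations:

```python
import requests

def run_cmds(switch, username, password, cmds):
    """Send CLI commands to an Arista switch via eAPI (JSON-RPC over HTTPS)."""
    payload = {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": "1",
    }
    response = requests.post(
        f"https://{switch}/command-api",
        json=payload,
        auth=(username, password),
        verify=False,  # lab example only; use proper certificates in production
    )
    response.raise_for_status()
    return response.json()["result"]

# Example: retrieve the EOS version from a hypothetical switch named "leaf1"
result = run_cmds("leaf1", "admin", "admin", ["show version"])
print(result[0]["version"])
```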
Open Workload
OpenWorkload is a network application enabling open workload portability, automation through integration with leading
virtualization and orchestration systems, and simplified troubleshooting by offering complete physical and virtual visibility.
• Seamless Scaling - full support for network virtualization, connecting to major SDN controllers
• Integrated Orchestration - interfaces to VMware NSX™, OpenStack, Microsoft, Chef, Puppet, Ansible and more to simplify
provisioning
• Workload Visibility - visibility to the VM level, enabling portable policies, persistent monitoring, and rapid troubleshooting of
cloud networks
Designed to integrate with VMware, OpenStack and Microsoft OMI, Arista’s open architecture allows for integration with any
virtualization and orchestration system.
Network Telemetry
Network Telemetry is a new model for faster troubleshooting from fault detection to fault isolation. Network Telemetry streams
data about network state, including both underlay and overlay network statistics, to applications from Splunk, ExtraHop, Corvil and
Riverbed. With critical infrastructure information exposed to the application layer, issues can be proactively avoided.
At the core of EOS is the System Data Base, or SysDB for short. SysDB is machine generated software code based on the object
models necessary for state storage for every process in EOS. All inter-process communication in EOS is implemented as writes to
SysDB objects. These writes propagate to subscribed agents, triggering events in those agents. As an example, when a user-level
ASIC driver detects a link failure on a port it writes this to SysDB; the LED driver then receives an update from SysDB, reads the
new state of the port and adjusts the LED status accordingly. This centralized database approach to passing state throughout the
system, and the automated way the SysDB code is generated, reduce risk and error, improve software feature velocity, and provide
flexibility for customers, who can use the same APIs to receive notifications from SysDB or to customize and extend switch features.
Arista’s software engineering methodology also benefits our customers in terms of quality and consistency:
• Complete fault isolation in user space, together with SysDB, effectively converts catastrophic events into non-events. The system
self-heals from common scenarios such as memory leaks. Every process is separate, with no IPC or shared-memory fate-sharing, is
endian-independent, and is multi-threaded where applicable.
• No manual software testing. All tests are automated and run 24x7; with the operating system running in emulators and on hardware,
Arista scales protocol and unit testing cost-effectively.
• Keep a single system binary across all platforms. This improves the testing depth on each platform, improves time-to-market,
and keeps feature and bug resolution compatibility across all platforms.
EOS provides a development framework that enables the core concept of extensibility. An open foundation and best-in-class
software development models deliver feature velocity, improved uptime, easier maintenance, and a choice of tools and options.
Some examples of how people customize and make use of Arista EOS extensibility include:
• Want to back up all log files every night to a specific NFS or CIFS share? Just mount the storage straight from the switch and use
rsync or rsnapshot to copy configuration files
• Want to store interface statistics or LANZ streaming data on the switch in a round-robin database? Run MRTG right on the
switch.
• Like the Internet2 PerfSonar performance management apps? Just run them locally.
• Want to run Nessus to security scan a server when it boots? Create an event-handler triggered on a port coming up.
• Using Chef, Puppet, CFEngine or Sprinkle to automate your server environment? Use any or all of these to automate
configuration and monitoring of Arista switches too.
• Want to PXE boot servers straight from the switch? Just run a DHCP and TFTP server right on the switch.
For those not comfortable running code on the same Linux instance that EOS operates on, guest operating systems can run on the
switch via built-in KVM. Resources (CPU, RAM, vNICs) can be allocated to guest OSs, and Arista ships switches with additional flash
storage via enterprise-grade SSDs.
Figure 21: Arista EOS foundation features and cloud network scalability
Figure 22: Arista Cloud network designs: Single tier SplineTM and Two Tier Spine/Leaf, 100 to 100,000+ ports
Conclusion
Arista’s cloud network designs take the principles that have made cloud computing compelling (automation, self-service
provisioning, linear scaling of both performance and economics) and combine them with the principles of Software Defined
Networking (network virtualization, custom programmability, simplified architectures, and more realistic price points) in a way that
is neither proprietary nor a vendor lock-in.
This combination creates a best-in-class software foundation for maximizing the value of the network to both the enterprise and
service provider data center: a new architecture for the most mission-critical location within the IT infrastructure that simplifies
management and provisioning, speeds up service delivery, lowers costs and creates opportunities for competitive differentiation,
while putting control and visibility back in the hands of the network and systems administrators.
Copyright © 2016 Arista Networks, Inc. All rights reserved. CloudVision, and EOS are registered trademarks and Arista Networks
is a trademark of Arista Networks, Inc. All other company names are trademarks of their respective holders. Information in this
document is subject to change without notice. Certain features may not yet be available. Arista Networks, Inc. assumes no
responsibility for any errors that may appear in this document. Sep 2015 02-0024-01