ASR9k vs. MX


Juniper advantages of MX vs. Cisco ASR 9K
The key advantages of the MX platform for the Alabama Supercomputer Authority stem from its
ability to serve in mixed environments across Enterprise, Service Provider, Campus,
Datacenter, and Research and Education networks.
The ASR9k can also serve in mixed environments across Mobile Backhaul, L2/Metro Aggregation,
CMTS Aggregation, Video Distribution & Services, Data Center Interconnect, DC WAN Edge,
WEB/OTT, SP Business Services, Residential Broadband, SP Core, SP Edge, and very large
Enterprise WAN [1].
In the AREN network the MX routers are positioned as combined core/edge/datacenter routers
rather than in a hierarchical Layer 3 core, so this architectural flexibility will protect
ASC's investment for the future. In contrast, the ASR 9000 is a Service Provider core router
and will ultimately not provide the flexibility needed for the AREN network.
This is also not true (see the positioning note above).
1. Native Layer 2 switching is fully supported at high scale on the MX
platform. The Cisco ASR is a router and does not switch. The ASR relies on VPLS
or L2VPN to deliver Layer 2. While it may be preferred to use L2VPN or VPLS to
transport L2 across the backbone, there are compelling use cases for the datacenter
environment and research networks to have pure L2 switching. Juniper has seen a
need in Research networks to provide a transparent L2 service, with or without MAC
learning. Local VPLS and other methods of L2 transport place restrictions on
Broadcast, Unknown Unicast, and Multicast packets, as well as on other L2 control
protocols, which is not desired in these use cases. This also plays into the next point
on SDN and OpenFlow.
This statement is misleading; the ASR9k is a router, but it can also switch with the
use of EVC and bridging. This approach is better than traditional switches, because
traditional switches don't support more than about 4,000 VLANs. On the ASR9k you can
apply up to 4,000 non-unique VLAN tags per physical interface, and this way you can
scale to 64K unique Layer 2 domains per line card. We allow the VLAN tag to be
port-significant: once L2 frames are received on the interface, we place them into
bridge domains and MAC-switch them through the router, then egress the traffic out
other physical interfaces or via any number of other methods. You can use the same
VLAN on every single interface, every port can be a different customer if you wish,
and it will never cause a conflict on the 9K.
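As a minimal sketch of this EVC model on IOS-XR (the interface numbers, VLAN tag, and bridge-domain names here are hypothetical), the same tag can be reused on different ports and mapped to different customers:

! Hypothetical EVC sketch: VLAN 100 is reused on two physical ports,
! but each subinterface is placed in a different customer bridge
! domain, so the tags never conflict.
interface TenGigE0/0/0/1.100 l2transport
 encapsulation dot1q 100
 rewrite ingress tag pop 1 symmetric
!
interface TenGigE0/0/0/2.100 l2transport
 encapsulation dot1q 100
 rewrite ingress tag pop 1 symmetric
!
l2vpn
 bridge group CUSTOMERS
  bridge-domain CUST-A
   interface TenGigE0/0/0/1.100
  !
  bridge-domain CUST-B
   interface TenGigE0/0/0/2.100
  !
 !
!

Because the tag is popped at ingress, forwarding is scoped by the bridge domain rather than by a global VLAN ID, which is what allows the per-line-card L2 domain scale described above; MAC learning can also be disabled per bridge domain where a fully transparent service is wanted.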
2. The MX is a key part of Juniper's solution for Software Defined Networking,
which will be important to delivering flexible cloud computing services for AREN's end
users. We are a key supplier to the Internet2 backbone with the MX and have a
production software load supporting OpenFlow 1.0 in the latest Junos release.
Furthermore, the MX works with Juniper's Contrail SDN controller as an SDN gateway to
the datacenter network. The capability to bridge physical and virtual networks is
supported by the MX with VXLAN-to-VLAN interworking. Juniper has an open approach to
SDN, supporting the major hypervisor platforms (VMware, KVM, Hyper-V) and also
providing an open source release of our SDN controller at www.opencontrail.org.
The ASR9k is also a key part of Cisco's SDN story. Cisco was the first vendor to do
VXLAN routing in hardware with the ASR9K, and we are fully compliant with the open SDN
standards. We support OpenFlow, BGP-LS, PCEP, onePK, NETCONF/YANG, and XML, among
other programmatic configuration methods. The ASR9K is compatible with OpenDaylight
and our own SDN controllers as well. Cisco is a Platinum member of the OpenDaylight
community and one of the largest contributors to the open source project
(www.opendaylight.org). Cisco is a member of ETSI NFV and the ONF and is an active
contributor to projects like OpenStack and Open vSwitch to accelerate the introduction
of new NFV products and services.
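As one concrete illustration of these programmatic interfaces, the sketch below shows how an IOS-XR router might export its IGP topology to an SDN controller over BGP-LS; the IS-IS instance name, AS number, and controller address are hypothetical:

! Hypothetical BGP-LS sketch: feed the IS-IS link-state database
! into BGP so an SDN controller peer can learn the live topology.
router isis CORE
 distribute link-state
!
router bgp 65000
 address-family link-state link-state
 !
 neighbor 192.0.2.100
  remote-as 65000
  description SDN controller peering
  address-family link-state link-state
 !
!

A controller can combine this topology feed with PCEP to compute and then signal paths across the network.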

3. MX will provide a lower cost upgrade from 10G to 40G to 100G transport
capacity. The MPC3E line cards quoted to ASC support 10G, 40G, and 100G MIC
modules. The transport capacity can be upgraded by swapping out the MIC modules
and not adding additional line cards in new slots. In contrast, the ASR line cards that
support 10G and 40G do not support 100G, so new line cards will be needed as
transport speeds increase to 100G.
Cisco also has the ability to sell CIRBN dense Tomahawk-based 10/40/100G combo ports
with the MOD 200/400 cards. We also have 100G interfaces with 10x10 breakout cables,
which would allow CIRBN to avoid upgrading cards. However, given the scale of the
solution today, this is not advantageous for CIRBN; we are confident that the proposed
Typhoon cards are sufficient to deliver high-SLA services to CIRBN customers. We have
positioned the best solution for the price per port for CIRBN based on the design
requirements.

4. Juniper supports an open standards approach to MPLS services. Firstly,
Juniper freely supports both LDP and RSVP signalling for LSPs. Cisco does not allow
RSVP-signalled LSPs, which then pushes the customer to deploy proprietary extensions
to LDP for resilient service connections. Secondly, Juniper adheres to EVPN standards
for datacenter interconnect and disaster recovery services. In contrast, Cisco has
pushed proprietary OTV for Layer 2 stretch. Proprietary protocols lead to problematic
lock-in over the long term, just as ASC has seen with EIGRP.
Cisco also supports an open standards approach to MPLS services. The ASR 9000 also
uses LDP and RSVP-TE, and we can signal LSPs with RSVP; I have tested this myself in a
customer lab for Brocade interop. We also adhere to MEF standards for E-Line, E-LAN,
E-Access, and E-Tree. I am not sure where Juniper is getting that Cisco is pushing OTV
on the ASR9k; in fact, the ASR9k does not even support OTV. Cisco has taken a very
open approach and has been moving away from proprietary protocols for years. For
example, the basic EIGRP specifications were released to the general public/IETF in
2013 [2].
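As a minimal sketch of an RSVP-signalled TE LSP on IOS-XR (the interface, destination router ID, and bandwidth values are hypothetical, and TE would also need to be enabled in the IGP):

! Hypothetical RSVP-TE sketch: a dynamically computed,
! RSVP-signalled LSP with a 100 Mbps reservation.
mpls traffic-eng
 interface TenGigE0/0/0/0
!
rsvp
 interface TenGigE0/0/0/0
  bandwidth 1000000
!
interface tunnel-te1
 ipv4 unnumbered Loopback0
 destination 10.0.0.2
 path-option 10 dynamic
 signalled-bandwidth 100000
 autoroute announce

The dynamic path-option has CSPF compute the path and RSVP signal it hop by hop; LDP can run alongside for best-effort services.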
5. Juniper's fabric architecture allows in-service upgrades. The MX960
redundancy scheme allows upgrade to future fabric generations while in service. The
Cisco ASR requires both fabric blades to run at the same speed, and the upgrade to a
higher-capacity fabric is an out-of-service procedure.
a. The MX960 currently supports fabric capacity of 480 Gbps per slot with MPC5 cards
and the latest fabric. On the ASR 9010/9006 we support fabric capacity of 880 Gbps
per slot with RSP880s. The current quote includes RSP440s, supporting 440 Gbps of
throughput per slot. We are working on making the upgrade from RSP440 to RSP880
in-service and are currently targeting the upcoming IOS-XR release for this.
b. Also, within the ASR9k lineup, our ASR 9912/9922 series does support in-service
fabric upgrades today for very dense bandwidth requirements, supporting up to
1.15 Tbps of fabric throughput.
1. Cisco Live 2015 San Diego, BRKARC-2003 (Cisco ASR 9000 Architecture):
http://d2zmdbbm9feqrf.cloudfront.net/2015/usa/pdf/BRKARC-2003.pdf
2. Cisco EIGRP Q&A:
http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/enhanced-interior-gateway-routing-protocol-eigrp/qa_C67-726299.html
