InterSAN White Paper


SAN interconnection between data centers on carrier-class Ethernet networks
contents
1. introduction
2. the need for SAN extension
2.1 storage network applications
2.2 SAN extension applications
2.2.1 disaster recovery plan
2.2.2 business continuity
2.2.3 data mirroring
2.2.4 content distribution
2.2.5 IT infrastructure consolidation
2.3 SAN extension technical requirements
2.3.1 long distance
2.3.2 minimal delays
2.3.3 minimal Packet Loss Rate (PLR)
2.3.4 high availability
2.3.5 flexibility
2.3.6 security
3. challenges for existing SAN extension solutions
3.1 DWDM
3.2 SONET/SDH networks
3.3 IP network
3.4 a market segment awaiting a solution
4. why is carrier-class Ethernet the solution adapted to Enterprise SAN extension?
4.1 introduction to carrier-class fiber Ethernet networks
4.1.1 end-to-end Quality of Service (QoS)
4.1.2 sub-50ms protection
4.1.3 operation, administration and maintenance
4.1.4 scalability
4.2 a network adapted to SAN extensions
4.3 running FCIP and iSCSI over carrier-class fiber Ethernet networks
5. conclusion
6. acronyms
1. introduction
Many enterprises now rely heavily on information and data to run their business. So they create data centers enabling them to handle, store, share and update the data and information they need efficiently and reliably. Storage Area Networks (SANs) are one of the key technologies for developing data centers. They are increasingly deployed by both enterprises and service providers to interconnect storage devices and application servers. Since data centers are critical property for their owners or the service providers to which they are outsourced, and the main guarantors of business continuity, they must be able to withstand natural disasters, unexpected power outages or terrorist attacks. This is why a redundant data center (i.e., a data center dedicated to information system backup) is often set up on a site geographically distant from that of the main data center. So enterprises or service providers have to interconnect their SANs; this interconnection is also called SAN extension.

There is also a need for rapid access to the databases and applications hosted in the data centers. Each data center covers a geographic zone, and the users of each zone access the databases and applications from their local data center. To guarantee the integrity and coherence of the data, real-time synchronization of data between all the data centers must be ensured. SAN extension is an efficient method enabling enterprises or their service providers to interconnect their data centers for backup and/or real-time synchronization.

This White Paper reviews the applications of SAN extension as well as existing SAN extension solutions, lists the technical parameters to be taken into account when choosing a SAN extension technology for a specific SAN application, and proposes a new SAN extension solution that meets enterprise technical and financial needs. This new solution is based on the new generation of carrier-class Ethernet fiber optic networks.

2. the need for SAN extension


2.1 storage network applications
In this new information era, data has become vital enterprise property. Globalization of the economy also means that data must be accessible at any time and from anywhere reliably, securely, efficiently, and cost-effectively. Many enterprises have begun to consolidate their IT resources and concentrate their efforts on data acquisition, handling and storage. To achieve this, they have built SANs to resolve the problems resulting from the enormous expansion of their storage peripherals.

A SAN is used to interconnect hosts (or servers) to storage resources, and storage resources among themselves. It provides shared and more efficient access to stored data without the transfers consuming more host mainframe CPU cycles (in certain cases, consuming fewer) compared to solutions where the storage systems are directly connected to hosts or servers, i.e., Directly Attached Storage (DAS). So mainframe computing power can be reserved for data processing for on-demand, business-critical applications. Storage devices can also communicate with each other across the SAN to perform operations such as data backup, synchronization or mirroring without involving the hosts or servers. Most SANs comprise hardware equipped with a Fibre Channel (FC) interface. FC is a protocol adapted to the transfer of large data blocks at very high rates.

SANs can also be used to virtualize storage resources, enabling these resources to be utilized more efficiently as a single big logical disk, regardless of the physical location of each system.

A SAN also helps enterprises reduce the overall operating costs of their IT infrastructure by consolidating sophisticated IT equipment into a few locations such as headquarters and data centers. This consolidation eliminates the need to put servers and storage resources in every branch or remote office and to maintain them continuously, thus eliminating the associated costs.

2.2 SAN extension applications


Enterprises usually choose to build two or more geographically dispersed data centers with a SAN in each of them to ensure
high data availability, load sharing, and data backup. The interconnection of these data centers is one of the key considerations
of many IS managers.

There are many SAN extension applications. Here are a few examples.

2.2.1 disaster recovery plan

Enterprises whose data and information constitute their core business must protect and store that data for a long time, notably to comply with regulations. This is data archiving. Given the threat of natural disasters, terrorism, fire and other unexpected accidents, enterprises must also have a Disaster Recovery Plan (DRP). To this end, multiple replicas of data and information must be created and stored at geographically dispersed sites. The backup data center should be as far as possible from the main data center (from 5 to 100 km), so large blocks of data must be transferred from the main data center to the disaster recovery center over a long distance. Without an adapted SAN extension solution, this distance is generally limited to less than 100 km.

2.2.2 business continuity

For a disaster recovery plan, a backup data center is built in addition to the main data center. But for business continuity, the main objective is 24 x 7 access to data. When the main data center fails, applications and remote-user access must be able to switch to the backup data center instantly so that business is not interrupted. Business continuity requires real-time server synchronization and data replication.

2.2.3 data mirroring

Most Internet businesses or ISPs use localized servers and databases to speed up the response time of some of their appli-
cations. Data consistency and integrity between localized databases and the central database must be maintained constantly
so that there is no difference between accessing a local database and accessing the central database. In order to ensure these
operations in real time, data mirroring is implemented to enable real-time synchronization of the different databases.

2.2.4 content distribution

This application allows a central location to distribute its content to the remote sites. For example, video on demand service
needs to distribute video content (usually bulky) from the central content library to the remote video server cluster across
an entire region. Video servers are usually equipped with FC interfaces that connect hundreds of terabytes of local storage
resources.


2.2.5 IT infrastructure consolidation

Many enterprises are taking measures to consolidate their sophisticated IT equipment, such as storage resources and databases, application servers, SAN switches and routers, in their headquarters or data centers. This consolidation helps enterprises significantly reduce both CapEx and OpEx and improve productivity. Instead of using local servers with internal and external DAS on every site, enterprises now build a SAN that connects high-capacity storage units located at only one or two sites. Remote sites then have only client PCs or workstations; in the client/server model, they need to implement only Presentation Layer functions such as the GUI.

Enterprises now need an efficient SAN extension solution that interconnects their data centers and thus offers many benefits, such as improved storage unit access performance, better storage space utilization thanks to virtualization, and enhanced availability. Figure 1 is an illustration of this. However, this consolidation of IT resources means higher risk during an outage. Since the storage units are centralized, a loss of data at the central site may entail a stoppage of activity for all the sites. When a consolidation policy has been adopted by an enterprise, protection of its data at a remote data center is all the more important.

Figure 1. IT consolidation

2.3 SAN extension technical requirements
SAN extension applications can be divided into two categories: real-time data synchronization and data backup. The first category requires very low delay, while the second requires very high bandwidth. We now detail SAN extension requirements in terms of SLAs (Service Level Agreements).

2.3.1 long distance

To better resist unexpected accidents or natural disasters, different data centers are usually located on sites remote from each other, and the SAN interconnection technology must meet this need. Over long distances, the finite speed of light in optical fiber lengthens transmission delays, which affects the performance of delay-sensitive SAN applications. Since light propagates in optical fiber at about 200 km/ms, the recommended rule of thumb is to use larger packet sizes in order to reduce the number of exchanges, and hence the time, required to complete an operation on a remote storage system.
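As an illustration of this rule, the propagation delay of a synchronous remote write can be estimated from the 200 km/ms figure above. The two-round-trip assumption (command and transfer-ready, then data and status) and the function name are illustrative, not taken from this paper:

```python
# Round-trip propagation delay over optical fiber, using the
# ~200 km/ms figure quoted above. A synchronous mirroring write is
# assumed (for illustration) to need two round trips.

def propagation_delay_ms(distance_km: float, round_trips: int = 2) -> float:
    """Total propagation delay in ms added to a remote write."""
    one_way_ms = distance_km / 200.0   # fiber: ~200 km per millisecond
    return 2 * one_way_ms * round_trips

for d in (5, 50, 100):
    print(f"{d:>4} km: {propagation_delay_ms(d):.2f} ms added per write")
```

Even at 100 km, propagation alone adds about 2 ms per write, which is why delay-sensitive synchronous applications constrain the usable distance.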

2.3.2 minimal delays

Due to the credit-based flow control mechanism of the Fibre Channel protocol, the transmission delay between two SAN islands must be very low so that high-throughput data transfer can be maintained. The rule of thumb is that one buffer credit corresponds to an interconnection distance of 1.5 km; the maximum interconnection distance will thus theoretically depend on the number of credits available on the SAN edge equipment. The total delay is the sum of the processing time of the SAN equipment, the time taken to cross the transmission equipment, and the propagation delay in the fiber optic cable; this total must be kept minimal to avoid degrading application performance. For businesses, low transmission delay is crucial for data integrity and consistency in real-time applications such as remote data mirroring. Moreover, the latency introduced by SAN interconnection must be minimal so that end users perceive little difference when accessing the same data stored on different sites.
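The credit rule above can be sketched numerically. The 1.5 km-per-credit figure comes from the text; the throughput model (linear degradation once the credit budget is exhausted) and all parameter names are simplifying assumptions for illustration:

```python
# Sketch of the buffer-credit rule of thumb: each buffer-to-buffer
# credit sustains full throughput over roughly 1.5 km of fiber.
# Beyond that, the link idles waiting for credit returns, so
# effective throughput is assumed to drop proportionally.

def max_full_rate_distance_km(credits: int, km_per_credit: float = 1.5) -> float:
    return credits * km_per_credit

def effective_throughput(link_gbps: float, credits: int, distance_km: float,
                         km_per_credit: float = 1.5) -> float:
    """Approximate achievable rate once the credit budget is exceeded."""
    supported = max_full_rate_distance_km(credits, km_per_credit)
    if distance_km <= supported:
        return link_gbps
    return link_gbps * supported / distance_km

print(max_full_rate_distance_km(64))                      # 96.0
print(round(effective_throughput(2.0, 64, 200.0), 2))     # 0.96
```

With 64 credits, a 2-Gbps link runs at full rate up to 96 km; at 200 km, under this simple model, less than half the nominal rate remains.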

2.3.3 minimal Packet Loss Rate (PLR)

The SAN transport layer is usually based on the Fiber Channel (FC) protocol, whose performance is very sensitive to PLR. High
PLR directly affects the quality of data block transfer. A low PLR requires guaranteed bandwidth. Currently, SAN FC equipment
uses 1-Gbps, 2-Gbps or 4-Gbps interfaces, but should soon scale to 10 Gbps.
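Since the IP-based extension protocols discussed later (FCIP, iSCSI) run over TCP, the well-known Mathis approximation of steady-state TCP throughput, rate ≈ (MSS/RTT)·(C/√p), gives a feel for why even a small PLR caps transfer rates. This is a textbook model, not something taken from this paper:

```python
import math

# Mathis et al. approximation of steady-state TCP throughput:
#   rate <= (MSS / RTT) * (C / sqrt(p)),  with C ~ sqrt(3/2).
# Used here only to illustrate how packet loss caps the throughput
# of TCP-based SAN extension (FCIP, iSCSI).

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    c = math.sqrt(1.5)
    bytes_per_s = (mss_bytes / (rtt_ms / 1000.0)) * c / math.sqrt(loss_rate)
    return bytes_per_s * 8 / 1e6

# 1460-byte MSS, 2 ms RTT: compare 0.01% loss with 1% loss
print(round(tcp_throughput_mbps(1460, 2.0, 1e-4)))
print(round(tcp_throughput_mbps(1460, 2.0, 1e-2)))
```

A hundredfold increase in loss rate divides the achievable TCP rate by ten, which is why a SAN extension network must keep the PLR close to zero.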

2.3.4 high availability

Any failure in the interconnection of SAN islands will severely jeopardize the integrity of data and the continuity of business if the
failure cannot be corrected rapidly. There is a risk of losing many data blocks during the service outage. Moreover, a prolonged
outage will automatically result in a network reconfiguration of SAN islands, which is an inherent feature of the Fiber Channel
protocol. The interconnection technology must therefore offer core network protection with the 50-ms switchover time that carrier-class networks provide.

2.3.5 flexibility

Multiple SANs can be interconnected to form a large fabric, within the limits of the Fibre Channel protocol (a maximum of 239 switches), through different topologies such as point-to-point, ring or mesh. SAN interconnection should support these different topologies.


2.3.6 security

The data transmitted between storage systems is obviously confidential, and this confidentiality must be protected during transfer from one site to another. The SAN extension technology must ensure that one customer's traffic is separated from the others' traffic, and it must offer protection against spoofing techniques and Denial of Service (DoS) attacks.

3. challenges for existing SAN extension solutions
3.1 DWDM

Over the past few years, DWDM (Dense Wavelength Division Multiplexing) has been the de-facto solution for SAN extension. This solution has the advantages of high bandwidth, low latency, and transparency to upper-layer protocols (such as FC, FICON and Gigabit Ethernet). But the cost of DWDM equipment and of fiber dedicated to each customer means that the SAN interconnection offerings proposed up until now by operators are reserved for a limited number of customers whose budget matches the stakes involved in protecting the data. It should be mentioned, however, that the price of these configurations is decreasing.

3.2 SONET/SDH networks

SONET/SDH is another solution for SAN extension. SAN extension can be offered on 155-Mbps links, but everything depends on the volume of information to be backed up. In France, some companies use SMHD backup (a France Telecom offering based on SDH for interconnecting sites at very high throughputs). For the higher throughputs required by higher volumes, new-generation SDH systems with GFP (Generic Framing Procedure) are needed. This standard allows SONET/SDH frames to carry Fibre Channel (FC) traffic transparently across a SONET/SDH network. However, 2.5-Gbps or 10-Gbps leased lines are still too expensive for many enterprises, which is a brake on SDH/SONET SAN extension development. But as with WDM solutions, the price of these configurations is decreasing.

3.3 IP network

To lower costs and extend SANs over theoretically unlimited distances, IP SAN solutions have been developed. They allow using an IP network, private or public (the Internet), to carry data storage flows between different sites. At the border of each SAN, a Fibre Channel over IP gateway is responsible for encapsulating each FC frame within an IP packet, itself inserted into an Ethernet frame, before transmission. FC over IP has two protocols, FCIP and iFCP. For both protocols, encapsulation of an FC frame within an Ethernet frame is identical with the exception of a few special fields. Since few providers have implemented iFCP, the rest of this document will illustrate only FCIP applications.

FCIP encapsulation is shown in Figure 2. Figure 3 shows how FCIP interconnects two SAN islands.

Figure 2. FCIP encapsulation

[Ethernet header | IP | TCP | FCIP | FC | SCSI command | SCSI data | CRC]

Figure 3. FCIP interconnection of SANs via an IP network

However, there is another native IP SAN transport protocol called Internet SCSI (iSCSI) that encapsulates SCSI data and commands directly into Ethernet/IP packets. TCP sessions (called iSCSI sessions) are set up between an iSCSI initiator (e.g., a server) and an iSCSI target (e.g., a storage unit) so that data blocks can be transferred reliably over TCP/IP networks. The iSCSI protocol can be generated by iSCSI drivers installed on Gigabit Ethernet network interface cards in the servers and storage units. There are also specialized iSCSI cards designed to reduce the number of server CPU cycles required for encapsulation of the SCSI commands.
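The layering described above can be sketched as follows. The header layouts are deliberately simplified placeholders (the real formats are defined by the iSCSI and Ethernet standards); every name and byte string in this snippet is illustrative:

```python
import struct

# Illustrative (NOT wire-accurate) sketch of the iSCSI layering
# described in the text: a SCSI command and its data are wrapped in
# an iSCSI PDU, carried over TCP/IP inside an Ethernet frame.
# The 4-byte "headers" below are placeholders, not real formats.

def encapsulate(scsi_command: bytes, scsi_data: bytes) -> bytes:
    iscsi_pdu = struct.pack("!I", len(scsi_command) + len(scsi_data)) \
                + scsi_command + scsi_data          # simplified iSCSI header
    tcp_segment = b"TCPH" + iscsi_pdu               # placeholder TCP header
    ip_packet   = b"IPH." + tcp_segment             # placeholder IP header
    eth_frame   = b"ETH." + ip_packet + b"CRC."     # placeholder MAC header + FCS
    return eth_frame

# 10-byte SCSI READ(10)-style CDB plus a small payload
frame = encapsulate(b"\x28" + b"\x00" * 9, b"payload")
print(len(frame))  # 37
```

The point of the sketch is the nesting order (SCSI inside iSCSI inside TCP inside IP inside Ethernet), which is exactly what Figure 4 depicts.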

Figure 4 shows the encapsulation of a SCSI data unit and a command into Ethernet/IP packets.
Figure 5 shows the SAN interconnection for iSCSI using the IP network.

Figure 4. iSCSI encapsulation

[Ethernet header | IP | TCP | iSCSI | SCSI command | SCSI data | CRC]

Figure 5. iSCSI interconnection via an IP network

The IP SAN extension solution is much cheaper than those based on DWDM and SONET/SDH systems, but it cannot guarantee SLAs due to the connectionless nature of IP networks. The delay and jitter of an IP network are much higher than those of DWDM and SONET/SDH networks, and the failure recovery time of an IP network is far higher than 50 ms and non-deterministic. If a public IP network such as the Internet is used as the transport network, data security is also a major problem.

IP SAN is thus better adapted to small companies that do not have large amounts of data to transfer and that do not have
stringent performance requirements in terms of availability and transit delays. So it is an economical, entry-level solution.

3.4 a market segment awaiting a solution
Figure 6 shows the positioning of these three SAN extension solutions. Note that there is currently no product adapted to enterprises whose requirements for real-time data access, synchronization, protection and fast business recovery after failure exceed what the IP-based transport network solution offers, but whose budget is lower than that required for DWDM and SONET/SDH solutions. The carrier-class fiber Ethernet solution is intended for these businesses.

Figure 6. SAN extension solutions positioning

4. why is carrier-class Ethernet the solution adapted to Enterprise SAN extension?
4.1 introduction to carrier-class fiber Ethernet networks
A carrier-class fiber Ethernet network does not change the MAC (Media Access Control) layer, but adds four major enhancements.

4.1.1 end-to-end Quality of Service (QoS)

Carrier-class fiber Ethernet enables service providers to deliver a guaranteed minimum rate for each traffic flow. At the access, traffic may be classified physically (e.g., by interface) or logically (e.g., by customer VLAN or type of application). This type of network guarantees minimal latency and jitter for traffic that is very sensitive to transmission delays. It is better adapted to data traffic while offering a level of QoS close to that of an SDH/SONET leased line. A carrier-class fiber Ethernet network efficiently manages congestion in order to maintain the rates of all flows subject to it. This feature is important both for real-time data synchronization and for data backup, since it avoids packet loss. Furthermore, Multi-Protocol Label Switching (MPLS), used in the core network, gives access to advanced traffic management functions for setting up the best path with guaranteed bandwidth in the network.
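Per-flow rate guarantees of this kind are typically enforced at the access with a token-bucket profile (CIR, see acronyms). A minimal sketch, assuming a single-rate policer; the class and parameter names are illustrative, not from any vendor's implementation:

```python
# Minimal token-bucket sketch of the per-flow rate guarantee
# described above: traffic up to the Committed Information Rate (CIR)
# conforms; excess traffic is dropped or remarked. Names like
# cir_bps and bucket_bytes are illustrative assumptions.

class TokenBucket:
    def __init__(self, cir_bps: float, bucket_bytes: int):
        self.rate = cir_bps / 8.0           # refill rate in bytes/second
        self.capacity = bucket_bytes
        self.tokens = float(bucket_bytes)   # bucket starts full
        self.last = 0.0

    def conforms(self, packet_bytes: int, now: float) -> bool:
        # refill tokens for the elapsed time, capped at bucket capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                     # in-profile: forward with QoS guarantee
        return False                        # out-of-profile: drop or remark

tb = TokenBucket(cir_bps=1_000_000, bucket_bytes=1500)
print(tb.conforms(1500, now=0.0))    # True: bucket starts full
print(tb.conforms(1500, now=0.001))  # False: only ~125 bytes refilled
```

Classifying storage flows into such a profile is what lets the operator reserve bandwidth so that storage packets are never dropped under congestion.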

Moreover, the Ethernet protocol offers interfaces at 10 Gbps. 10-Gigabit Ethernet (10 GE) has become a mature and affordable technology that can provide all the bandwidth SAN extension needs, whether delivered over DWDM systems or, in the near future, over fiber Ethernet networks.

4.1.2 sub-50ms protection

A carrier-class fiber Ethernet network achieves sub-50ms protection (in the core network as a standard feature, and at the access as an option) by using MPLS Fast Reroute. Unlike a SONET/SDH network, this protection mechanism has the advantage of functioning in any network topology: point-to-point, ring or meshed.

4.1.3 operation, administration and maintenance

A large public data transport network adapted to SAN extension covers a wide geographic zone and supports many users. In
this network, trouble-shooting gets more difficult and OAM (Operation, Administration, Maintenance) becomes critical. Network
operators have to pinpoint problems and reduce service downtime. Traditional Ethernet had few OAM functions; standardization bodies (the IEEE 802.3 Working Group and the Metro Ethernet Forum) are now working to add OAM functions to the protocol. Some vendors have already integrated pre-standard OAM functions into their products, such as Ethernet loop-back, BER detection, SLA measurements, and alarms on critical problems.

4.1.4 scalability

Enterprise-class Ethernet networks have intrinsic limitations when used as a public transport network. These limitations include the number of VLANs configurable per network, the number of MAC addresses that must be learned and stored in each device, and the long and non-deterministic convergence time of the STP (Spanning Tree Protocol), which was initially designed to avoid loops in an Ethernet network but is used by some vendors as a core network protection system. To resolve these scalability issues, MPLS is widely used in carrier-class fiber Ethernet networks and has become a mature and very scalable solution.

4.2 a network adapted to SAN extensions

Carrier-class fiber Ethernet networks have features that enable them to support SAN extension particularly well. These are:

1. Guaranteed bandwidth without packet loss. Carrier-class fiber Ethernet networks support TE (Traffic Engineering), which can find the best path with the required bandwidth between two data centers with SAN extension. This bandwidth is allocated and reserved throughout the life of the SAN extension service. For SAN extension service, network operators can easily configure the network so that it never drops storage traffic packets.

2. Minimal delay and jitter. Carrier-class fiber Ethernet networks function on OSI Layer 2 and introduce very low delay and jitter compared with IP networks, which are based on OSI Layer 3. Within a typical metropolitan carrier-class fiber Ethernet network, maximum-size Ethernet frames (1518 bytes) are transmitted in less than a millisecond and with less than 0.1 ms of jitter. Moreover, jumbo frames can be used to improve performance.

3. Security. Carrier-class fiber Ethernet networks based on an MPLS core network are connection oriented. These connections are similar to wavelengths in DWDM, to VCCs (Virtual Channel Connections) in ATM (Asynchronous Transfer Mode) or to TDM (Time Division Multiplexing) channels in SONET/SDH. Connections provide natural separation between customer traffic and thus ensure data security and confidentiality for enterprise data. Unlike a connectionless IP network, a carrier-class fiber Ethernet network takes advantage of this connection-oriented feature to ensure data security and confidentiality, so that enterprises do not have to use expensive and inefficient encryption systems for their data traffic.

4. High availability. On a carrier-class fiber Ethernet network, if a network link failure occurs, the fail-over time is less than 50 ms thanks to flow protection mechanisms (in the core network, and at the access if it is secured by a loop) introduced by MPLS Fast Reroute.

5. Long distance. SAN extension service can thus be provided both over a metropolitan area and between two areas, by extending the carrier-class fiber Ethernet network over interconnections in DWDM mode.

6. Cost-effectiveness. A carrier-class fiber Ethernet network is now one of the only technologies that provides affordable high bandwidth (at least 1 Gbps) to enterprise customers.

4.3 running FCIP and iSCSI over carrier-class fiber Ethernet networks

As explained briefly in section 3.3, the traffic associated with data storage can be encapsulated in IP packets, which are themselves encapsulated in Ethernet frames via an FC over IP gateway. The information in the Ethernet header transporting the FC flows is enough to enable the network to map the frames. The network access equipment can associate several types of traffic, including storage traffic, with connections enabling the desired sites to be reached.

Figure 7 shows a typical SAN extension using an FCIP gateway over a carrier-class fiber Ethernet network, and Figure 8 shows the same extension using the iSCSI protocol. As can be seen, in addition to SAN extension service, other data transport services such as Layer 2 VPN (Ethernet Private LAN Service or E-LAN service) can be delivered through the same access device and over the same carrier-class fiber Ethernet network. Compared with Figure 1, LAN-to-LAN and SAN-to-SAN are now consolidated into a single (and common) carrier-class fiber Ethernet network, which helps further reduce the cost of SAN extension service.

Figure 7. FCIP SAN Extension over carrier-class Ethernet

Figure 8. iSCSI Extension over carrier-class Ethernet

5. conclusion
SAN extension service provides a new opportunity for enterprises wanting to set up a Disaster Recovery Plan, as a complement to current SAN extension solutions based on DWDM, which are essentially accessible to large enterprises. As the business of many SMEs becomes increasingly dependent on data and information, many of these enterprises have started using storage devices and SANs to store and manage the ever-growing volume of their data. They want to share and access this data and information efficiently. They also have to build backup information system centers to protect their data and information against human error, terrorist attacks or natural disasters. A second data center can also be used to share the traffic load and thus improve response times. To interconnect their SAN islands, enterprises require a solution more cost-effective than DWDM. The carrier-class fiber Ethernet network is the ideal solution to meet these needs, both technically and economically. As a premium service, SAN extension via a carrier-class fiber Ethernet network rounds out the Ethernet service portfolio, which already includes the LAN-to-LAN and Internet access offerings available on the market. With SAN extension over an Ethernet network, France Telecom solutions can now cover all the interconnection needs of enterprises, from small to large, and generate more revenues.

Figure 9. Carrier-class Ethernet: the new SAN extension solution for SMEs and large enterprises

6. acronyms
10 GE: 10 Gigabit Ethernet IT: Information Technology
ATM: Asynchronous Transfer Mode MAC: Media Access Control
BER: Bit Error Rate MPLS: Multi-Protocol Label Switching
CapEx: Capital Expenditure OAM: Operation Administration and Maintenance
CIR: Committed Information Rate OC: Optical Carrier
CRC: Cyclic Redundancy Check OpEx: Operation Expenditure

France Télécom - 6 place d'Alleray, 75505 Paris Cedex 15 - SA au capital de 10 412 239 188 euros - 380 129 866 RCS Paris - non-contractual document - June 2006 - www.aressy.com
DAS: Directly Attached Storage OSI: Open Systems Interconnection
DoS: Denial of Service PLR: Packet Loss Rate
DWDM: Dense Wavelength Division Multiplexing QoS: Quality of Service
EIR: Excess Information Rate SAN: Storage Area Network
E-LAN: Ethernet LAN service SCSI: Small Computer Systems Interface
ESCON: Enterprise Systems Connection SDH: Synchronous Digital Hierarchy
FC: Fibre Channel SLA: Service Level Agreement
FCIP: Fibre Channel over IP SME: Small and Medium Enterprises
FICON: Fibre Connection SONET: Synchronous Optical NETwork
GE: Gigabit Ethernet SSP: Storage Service Provider
GFP: Generic Framing Procedure STM: Synchronous Transport Module
GUI: Graphic User Interface TCP: Transmission Control Protocol
iFCP: Internet Fibre Channel Protocol TDM: Time Division Multiplexing
IP: Internet Protocol TE: Traffic Engineering
iSCSI: Internet Small Computer Systems Interface VCC: Virtual Channel Connection
ISO: International Organization for Standardization VLAN: Virtual Local Area Network

To find out more, contact your sales advisor or go to


www.orange-business.com

