InterSAN White Paper
less) compared to solutions where the storage systems are directly connected to hosts or servers, i.e., Directly Attached Storage (DAS).
… of their IT infrastructure by consolidating the sophisticated IT equipment into a few locations such as headquarters and data centers. This consolidation eliminates the need for putting servers and storage resources in every branch or remote office and maintaining them continuously, thus eliminating the costs generated.
There are many SAN extension applications. Here are a few examples.
Enterprises whose data and information constitute their core business must protect and store that data for long periods, notably to comply with regulations. This is data archiving. Given the threat of natural disasters, terrorism, fire and other unexpected accidents, enterprises must also have a Disaster Recovery Plan (DRP). To this end, multiple replicas of data and information must be created and stored at geographically dispersed sites. The backup data center should be as far as possible from the main data center (from 5 to 100 km), so large blocks of data must be transferred from the main data center to the disaster recovery center over a long distance. Without an adapted SAN extension solution, this distance is generally limited to less than 100 km.
For a disaster recovery plan, a backup data center is built in addition to the main data center. But for business continuity, the
main objective is accessing data 24 x 7. When the main data center fails, applications and access for remote users must
be able to switch to the backup data center instantly so that business is not interrupted. Business continuity requires real-time
server synchronization and data replication.
Most Internet businesses or ISPs use localized servers and databases to speed up the response time of some of their applications. Data consistency and integrity between localized databases and the central database must be maintained constantly so that there is no difference between accessing a local database and accessing the central database. In order to ensure these operations in real time, data mirroring is implemented to enable real-time synchronization of the different databases.
This application allows a central location to distribute its content to the remote sites. For example, a video on demand service needs to distribute video content (usually bulky) from the central content library to the remote video server cluster across an entire region. Video servers are usually equipped with FC interfaces that connect to hundreds of terabytes of local storage resources.
2.2.5 IT infrastructure consolidation
Many enterprises are taking measures to consolidate their sophisticated IT equipment such as storage resources and databases, application servers, SAN switches and routers in their headquarters or data centers. This consolidation helps enterprises significantly reduce both CapEx and OpEx and improve productivity. Instead of using local servers with internal and external DAS on every site, enterprises now build a SAN that connects high capacity storage units located only at one or two sites. Remote sites thus have only client PCs or workstations. This means that, in the client/server model, remote sites only need to implement Presentation Layer functions such as the GUI. Enterprises now need an efficient SAN extension solution that interconnects their data centers and thus offers many benefits, such as improved storage unit access performance, better storage space utilization thanks to virtualization, and enhanced availability. Figure 1 is an illustration of this. However, this consolidation of IT resources means higher risk during an outage. Since the storage units are centralized, a loss of data at the central site may entail a stoppage of activity for all the sites. When a consolidation policy has been adopted by an enterprise, protection of its data at a remote data center is all the more important.
Figure 1. IT consolidation
2.3 SAN extension technical requirements
SAN extension applications can be divided into two categories: real-time data synchronization and data backup. The first category of applications requires very low delay, while the second requires very high bandwidth. We now detail SAN extension features in terms of SLAs (Service Level Agreements).
To better resist unexpected accidents or natural disasters, the different data centers are usually located on sites remote from each other, and the SAN interconnection technology must meet this need. Over long distances, the propagation time of light in optical fiber lengthens transmission delays, which affects the performance of delay-sensitive SAN applications. The speed of light in optical fiber being about 200 km/ms, the recommended rule of thumb is to use larger packet sizes in order to reduce the number of round trips, and thus the time required to complete an operation on a remote storage system.
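To put the 200 km/ms figure in perspective, the short sketch below estimates the latency added to a synchronous remote write. It is a rough Python illustration rather than part of the original paper: the two-round-trip write pattern and the 0.1 ms equipment delay are assumed values.

```python
# Rough latency estimate for a synchronous remote write over a SAN extension.
# Illustrative only: light travels about 200 km/ms in fiber, and a SCSI write
# is assumed to need two protocol round trips (command/transfer-ready, then
# data/status); real equipment adds its own processing delay on top.

def propagation_delay_ms(distance_km, speed_km_per_ms=200.0):
    """One-way propagation delay in the fiber, in milliseconds."""
    return distance_km / speed_km_per_ms

def write_latency_ms(distance_km, round_trips=2, equipment_delay_ms=0.1):
    """Delay added to one remote write operation (assumed values)."""
    rtt_ms = 2 * propagation_delay_ms(distance_km)
    return round_trips * rtt_ms + equipment_delay_ms

for d in (10, 50, 100):
    print(f"{d:>3} km: one-way {propagation_delay_ms(d):.3f} ms, "
          f"remote write ~{write_latency_ms(d):.2f} ms")
```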
Due to the credit-based flow control mechanism of the Fibre Channel protocol, the transmission delay between two SAN islands must be very low so that high-throughput data transfer can be maintained. The rule of thumb provides the following recommendation: one credit corresponds to an interconnection distance of 1.5 km. The maximum interconnection distance permitted will thus theoretically depend on the number of credits available on the SAN edge equipment. The processing time introduced by the SAN equipment, the time taken to cross the transmission equipment and the propagation delay in the optical fiber all add up, and this total must remain minimal to avoid degrading application performance. For businesses, low transmission delay is crucial for data integrity and consistency in real-time applications such as remote data mirroring. Moreover, the latency introduced by the SAN interconnection must be minimal so that end users perceive little difference when accessing the same data stored on different sites.
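The impact of the credit mechanism can be estimated with a simple bandwidth-delay calculation. The sketch below is an illustration rather than part of the paper; the 2 KB frame size and the credit counts are assumed values.

```python
# Throughput ceiling imposed by buffer-to-buffer credits (illustrative).
# A Fibre Channel sender may have at most `credits` unacknowledged frames in
# flight, so sustained throughput cannot exceed credits * frame_size / RTT.

def fc_throughput_gbps(distance_km, credits, frame_bytes=2048,
                       speed_km_per_ms=200.0):
    rtt_s = 2 * (distance_km / speed_km_per_ms) / 1000.0   # round trip, seconds
    return credits * frame_bytes * 8 / rtt_s / 1e9          # ceiling in Gbps

for km in (10, 50, 100):
    print(f"{km:>3} km: 16 credits <= {fc_throughput_gbps(km, 16):.2f} Gbps, "
          f"64 credits <= {fc_throughput_gbps(km, 64):.2f} Gbps")
```

At 100 km, for instance, 16 credits cap the link below 0.3 Gbps, which is why long-distance interconnection requires either many credits on the edge equipment or a transport technology that masks the distance.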
The SAN transport layer is usually based on the Fibre Channel (FC) protocol, whose performance is very sensitive to the packet loss rate (PLR). A high PLR directly affects the quality of data block transfers, so keeping the PLR low requires guaranteed bandwidth. Currently, SAN FC equipment uses 1-Gbps, 2-Gbps or 4-Gbps interfaces, but should soon scale to 10 Gbps.
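This sensitivity to loss can be quantified with a simple probability argument. The sketch below is illustrative and not from the paper; the 2 KB frame size and the 1 GB transfer size are assumed values.

```python
# Why Fibre Channel traffic is so sensitive to packet loss (illustrative).
# A lost frame typically forces the upper SCSI layer to recover the whole
# exchange, so the chance of completing a large transfer without any
# recovery drops quickly as the packet loss rate (PLR) rises.

def clean_transfer_probability(plr, transfer_bytes, frame_bytes=2048):
    frames = transfer_bytes // frame_bytes
    return (1.0 - plr) ** frames

ONE_GB = 1_000_000_000
for plr in (1e-7, 1e-5, 1e-3):
    p = clean_transfer_probability(plr, ONE_GB)
    print(f"PLR {plr:.0e}: P(1 GB transferred with no lost frame) = {p:.4f}")
```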
Any failure in the interconnection of SAN islands will severely jeopardize the integrity of data and the continuity of business if the failure cannot be corrected rapidly: many data blocks can be lost during the service outage. Moreover, a prolonged outage will automatically trigger a reconfiguration of the SAN islands, which is an inherent feature of the Fibre Channel protocol. The interconnection technology must therefore offer core network protection and guarantee the 50-ms switchover time supported by carrier-class networks.
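A rough calculation shows why the switchover time matters. The sketch below is an illustration rather than part of the paper; the 2 Gbps replication rate and the outage durations are assumed values.

```python
# Data exposed during an interconnection outage (illustrative).
# The volume of replication traffic that cannot reach the remote site grows
# linearly with the outage duration, which is why a 50 ms protection
# switchover is so much safer than a multi-second rerouting.

def data_at_risk_mb(rate_gbps, outage_s):
    return rate_gbps * 1e9 * outage_s / 8 / 1e6   # megabytes not transferred

for outage_s in (0.05, 5.0, 30.0):
    print(f"{outage_s:>5.2f} s outage at 2 Gbps: "
          f"~{data_at_risk_mb(2.0, outage_s):,.0f} MB held back")
```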
2.3.5 flexibility
Multiple SANs can be interconnected to form a large fabric, within the limits of the Fibre Channel protocol (a maximum of 239 switches), through different topologies such as point-to-point, ring or mesh. The SAN interconnection technology should support these different topologies.
2.3.6 security
The data transmitted between storage systems is obviously confidential, and this confidentiality must be protected during transfer from one site to another. The SAN extension technology must ensure that one customer's traffic is separated from other customers' traffic. It must also offer protection against spoofing techniques and Denial of Service (DoS) attacks.
Figure 2. FCIP encapsulation
However, there is another native IP SAN transport protocol, called Internet SCSI (iSCSI), that encapsulates SCSI data and commands directly into Ethernet/IP packets. TCP sessions (called iSCSI sessions) are set up between an iSCSI initiator (e.g., a server) and an iSCSI target (e.g., a storage unit) so that data blocks can be transferred reliably over TCP/IP networks.
The iSCSI protocol can be handled by iSCSI drivers installed on the Gigabit Ethernet network interface cards of the servers and storage units. There are also specialized iSCSI cards designed to reduce the number of server CPU cycles required to encapsulate SCSI commands.
Figure 4 shows the encapsulation of a SCSI data unit and a command into Ethernet/IP packets.
Figure 5 shows the SAN interconnection for iSCSI using the IP network.
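As a rough idea of the cost of this encapsulation, the sketch below computes how much of each Ethernet frame is actual SCSI payload. It is an illustration rather than part of the paper: the header sizes are the standard ones (Ethernet 14 B + 4 B FCS, IPv4 20 B and TCP 20 B without options, iSCSI basic header segment 48 B) and the payload sizes are assumed examples.

```python
# Wire efficiency of iSCSI encapsulation (illustrative).
# Stack: Ethernet / IP / TCP / iSCSI basic header segment / SCSI payload.

ETH_BYTES   = 14 + 4   # Ethernet header + frame check sequence
IP_BYTES    = 20       # IPv4 header without options
TCP_BYTES   = 20       # TCP header without options
ISCSI_BYTES = 48       # iSCSI basic header segment

def wire_efficiency(scsi_payload_bytes):
    """Fraction of bytes on the wire that are SCSI payload."""
    total = ETH_BYTES + IP_BYTES + TCP_BYTES + ISCSI_BYTES + scsi_payload_bytes
    return scsi_payload_bytes / total

# 1412 B roughly fills a standard 1500 B Ethernet MTU once the headers are added.
for payload in (256, 1024, 1412):
    print(f"{payload:>4} B payload: {wire_efficiency(payload):.1%} efficient")
```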
The IP SAN extension solution is much cheaper than those based on DWDM and SONET/SDH systems, but it cannot guarantee SLAs due to the connectionless nature of IP networks. The delay and jitter of an IP network are much higher than those of DWDM and SONET/SDH networks, and the failure recovery time of an IP network is far longer than 50 ms and non-deterministic. If a public IP network such as the Internet is used as the transport network, data security is also a major problem.
IP SAN is thus better adapted to small companies that do not have large amounts of data to transfer and that do not have stringent performance requirements in terms of availability and transit delays. It is an economical, entry-level solution.
3.4 a market segment awaiting a solution
Figure 6 shows the positioning of these three SAN extension solutions. Note that there is currently no product adapted to enterprises whose requirements for real-time data access, synchronization, protection and fast business recovery after failure exceed what the IP-based transport network solution can offer, but whose budget is lower than that required for DWDM and SONET/SDH solutions. The carrier-class fiber Ethernet solution is intended for these businesses.
4. why is carrier-class Ethernet the solution adapted to enterprise SAN extension?
4.1 introduction to carrier-class fiber Ethernet networks
A carrier-class fiber Ethernet network does not change the MAC (Media Access Control) layer, but adds four major enhancements.
Carrier-class fiber Ethernet enables service providers to deliver a guaranteed minimum rate for each traffic flow. At the access, the traffic may be classified physically (e.g., based on interface) or logically (e.g., based on customer VLAN or type of application). This type of network guarantees minimal latency and jitter for traffic that is very sensitive to transmission delays. It is better adapted to data traffic while offering a level of QoS close to that of an SDH/SONET leased line. A carrier-class fiber Ethernet network efficiently manages congestion in order to maintain the rates of all flows subject to this congestion. This feature is important both for real-time data synchronization and for data backup since it avoids any packet loss. Furthermore, Multi-Protocol Label Switching (MPLS), used in the core network, gives access to advanced traffic management functions that set up the best path with guaranteed bandwidth in the network.
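The per-flow rate guarantee is typically enforced by metering traffic at the access against its committed information rate (CIR). The sketch below shows the classic token-bucket logic behind such a commitment; it is a simplified illustration rather than part of the paper, and the CIR value, burst size and flow key are assumed values.

```python
import time

class TokenBucket:
    """Minimal token-bucket meter: traffic within the committed information
    rate (CIR) conforms and keeps its guarantee; excess traffic is dropped
    or marked for best-effort treatment."""

    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0      # credit refill rate, bytes per second
        self.capacity = burst_bytes    # committed burst size
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                # within CIR: forward with guarantee
        return False                   # excess: drop or mark down

# Example: one meter per customer flow, keyed here on an access VLAN.
meters = {"vlan_100": TokenBucket(cir_bps=100_000_000, burst_bytes=64_000)}
print(meters["vlan_100"].conforms(1500))   # True while within the committed rate
```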
Moreover, the Ethernet protocol offers interfaces at 10 Gbps. 10-Gigabit Ethernet (10 GE) has become a mature and affordable technology that can provide all the bandwidth SAN extension needs, whether via DWDM systems or, in the near future, via fiber Ethernet networks.
A carrier-class fiber Ethernet network achieves sub-50 ms protection (in the core network as a standard feature and at the access as an option) by using MPLS Fast Reroute. Unlike a SONET/SDH network, this protection system has the advantage of functioning in any network topology: point-to-point, ring or mesh.
A large public data transport network adapted to SAN extension covers a wide geographic zone and supports many users. In such a network, troubleshooting gets more difficult and OAM (Operation, Administration, Maintenance) becomes critical: network operators have to pinpoint problems and reduce service downtime. Traditional Ethernet had few OAM functions, but standardization bodies (the IEEE 802.3 Working Group and the Metro Ethernet Forum) are now working to add OAM functions to the protocol. Some vendors have already integrated pre-standard OAM functions such as Ethernet loop-back, BER detection, SLA measurements, and alarms on critical problems into their products.
4.1.4 scalability
Enterprise-class Ethernet networks have intrinsic limitations when used as a public transport network. These limitations include the number of VLANs configurable per network, the number of MAC addresses that must be learned and stored in the devices, and the long and non-deterministic convergence time of STP (Spanning Tree Protocol), which was initially designed to avoid loops in an Ethernet network but is used by some vendors as a core network protection system. To resolve these scalability issues, MPLS is widely used in carrier-class fiber Ethernet networks and has become a mature and very scalable solution.
4.2 a network adapted to enterprise SAN extension
… in SONET/SDH. Connections provide natural separation between customer traffic and thus ensure data security and confidentiality for enterprise data. Unlike an IP network, which is connectionless, a carrier-class fiber Ethernet network takes advantage of this connection-oriented feature to ensure data security and confidentiality, so that enterprises do not have to use expensive and inefficient encryption systems for their data traffic.
4. High availability. On a carrier-class fiber Ethernet network, if a network link failure occurs, the fail-over time is less than 50 ms thanks to the flow protection mechanisms (in the core network, and at the access if it is secured by a loop) introduced by MPLS Fast Reroute.
5. Long distance. SAN extension service can thus be provided both over a metropolitan area and between two areas by extending the carrier-class fiber Ethernet network on in…
… device and over the same carrier-class fiber Ethernet network. Compared with Figure 1, LAN-to-LAN and SAN-to-SAN are now consolidated into a single (and common) carrier-class fiber Ethernet network, which helps further reduce the cost of the SAN extension service.
5. conclusion
SAN Extension service provides a new opportunity for enterprises wanting to set up a Disaster Recovery Plan, as a complement to current SAN extension solutions based on DWDM, which are essentially accessible only to large enterprises. As the business of many SMEs becomes increasingly dependent on data and information, many of these enterprises have started using storage devices and SANs to store and manage the ever-growing volume of their data. They want to share and access this data and information efficiently. They also have to build backup information system centers to protect their data and information against human error, terrorist attacks or natural disasters. A second data center can also be used to share the traffic load and thus improve response times. To interconnect their SAN islands, enterprises require a more cost-effective solution than DWDM. The carrier-class fiber Ethernet network is the ideal solution to meet these needs both technically and economically. As a premium service, SAN Extension via a carrier-class fiber Ethernet network rounds out the Ethernet service portfolio, which already includes the LAN-to-LAN and Internet Access services available on the market. With SAN over an Ethernet network, France Telecom solutions can now cover all the interconnection needs of enterprises, from small to large, and generate more revenues.
Figure 9. Carrier-class Ethernet: the new SAN extension solution for SMEs and large enterprises
6. acronyms
10 GE: 10 Gigabit Ethernet
ATM: Asynchronous Transfer Mode
BER: Bit Error Rate
CapEx: Capital Expenditure
CIR: Committed Information Rate
CRC: Cyclic Redundancy Check
DAS: Directly Attached Storage
DoS: Denial of Service
DWDM: Dense Wavelength Division Multiplexing
EIR: Excess Information Rate
E-LAN: Ethernet LAN
ESCON: Enterprise Systems Connection
FC: Fibre Channel
FCIP: Fibre Channel over IP
FICON: Fibre Channel-based Connection
GE: Gigabit Ethernet
GFP: Generic Framing Procedure
GUI: Graphical User Interface
iFCP: Internet Fibre Channel Protocol
IP: Internet Protocol
iSCSI: Internet Small Computer Systems Interface
ISO: International Organization for Standardization
IT: Information Technology
MAC: Media Access Control
MPLS: Multi-Protocol Label Switching
OAM: Operation, Administration and Maintenance
OC: Optical Carrier
OpEx: Operational Expenditure
OSI: Open Systems Interconnection
PLR: Packet Loss Rate
QoS: Quality of Service
SAN: Storage Area Network
SCSI: Small Computer Systems Interface
SDH: Synchronous Digital Hierarchy
SLA: Service Level Agreement
SME: Small and Medium Enterprises
SONET: Synchronous Optical NETwork
SSP: Storage Service Provider
STM: Synchronous Transport Module
TCP: Transmission Control Protocol
TDM: Time Division Multiplexing
TE: Traffic Engineering
VCC: Virtual Circuit Connection
VLAN: Virtual Local Area Network