FJ GC Eternus Ilm
Copyright © 2009-2010 FUJITSU LIMITED, All Rights Reserved
Copyright © 2009-2010 Oracle Corporation Japan. All Rights Reserved.
Contents
1. Introduction
2. Validation Objectives
3. Validation Equipment
3.1. Fujitsu SPARC Enterprise
3.1.1. SPARC Enterprise M4000
3.1.2. SPARC Enterprise M3000
3.2. ETERNUS DX Disk Storage System
3.2.1. RAID Migration
3.2.2. Eco-mode
3.2.3. ETERNUS SF Storage Cruiser and ETERNUS SF AdvancedCopy Manager
3.2.4. ETERNUS Multipath Driver
3.2.5. Features of the Nearline Disk in the ETERNUS DX
4. Oracle Database 11g Functions
4.1. Oracle Partitioning
4.2. Automatic Storage Management (ASM)
4.3. Real Application Clusters (RAC)
5. Validation Setup
5.1. System Configuration
5.1.1. Database Server (Online Business Server)
5.1.2. Database Server (Tabulation Processing Server)
5.1.3. Storage
5.1.4. Client
5.2. Schema Configuration
5.3. Application Model
5.3.1. Online Processing
5.3.2. Tabulation Processing
6. Validation Details and Results
6.1. ILM Using Oracle Standard Functions
6.1.1. ILM and Physical Design Using Oracle Standard Functions
6.1.2. Resource Use Status when Executing MOVE PARTITION
6.1.3. Impact of MOVE PARTITION on Operations
6.1.4. Summary of ILM Using MOVE PARTITION
6.2. ILM Using RAID Migration
6.2.1. Efficiency and Physical Design Using RAID Migration
6.2.2. Impact of RAID Migration on Operations
6.2.3. Time Taken for RAID Migration
6.2.4. Summary of ILM Using RAID Migration
6.3. Effects of Disk Performance Differences on Tabulation Processing
6.4. Summary of ILM Validation
7. Backup
7.1. OPC Backup Performance
7.1.1. Single Volume Backup Performance
7.1.2. Performance for Whole Database Backup
7.2. Summary of Backup Validation
8. Summary
9. Appendix
9.1. Using Eco-mode to Reduce Power Consumption
9.2. Scheduled Stoppage Using Power Interlock Function
9.3. Additional Notes on ILM Based on RAID Migration
9.3.1. Support for Increased Data Volumes
9.3.2. Support for Data Reduction
9.4. Disk Cost
1. Introduction
The rapidly changing economic climate has generated a pressing need to reappraise corporate
investments. IT investment is no exception, and there is growing pressure for a clear, detailed
awareness of the specifics of IT costs and ways to lower such costs. At the same time, with corporate
business becoming increasingly computerized, the data volumes handled by corporate systems have
continued to grow. Exacting requirements for long-term storage of business data to satisfy revised
laws and compliance requirements have dramatically increased the data volumes retained and stored
on various systems. The volume of data handled by systems is said to have increased three-fold over
the most recent two-year period, and a major focus in the quest to reduce IT costs has been
minimizing rising storage costs associated with storing this data.
Fujitsu Limited and Oracle Corporation Japan have developed a joint Information Lifecycle
Management (ILM) solution to help customers manage their data both in the short term and long
term. This solution combines the Fujitsu ETERNUS DX storage system (hereinafter called
“ETERNUS DX”) with the Oracle Database 11g database to handle growing data management and
storage costs generated by rising data volumes. This ILM solution is based on a business model that
assumes access frequencies for business data fall off over time. It optimizes management and storage
costs over data lifecycles by sequentially transferring archival data with lower access frequencies to
lower cost storage areas, thereby making optimal use of storage facilities.
This ILM solution deploys a hierarchical storage system that combines a high-speed fibre channel
(FC) disk and a high-capacity low-cost nearline serial ATA (SATA) disk into a single ETERNUS
DX package. The database partitions the data tables covered by the ILM in chronological order,
using Oracle Database 11g’s Oracle Partitioning. This system reduces and optimizes overall storage
costs by allocating partitions covered by the ILM to optimized storage areas.
This whitepaper examines partition transfers using the Oracle Database MOVE PARTITION statement and data transfers using the RAID migration function specific to ETERNUS DX, with the aim of establishing procedures for database ILM in a customer's system environment. We examined these two methods to establish database ILM best practices for various situations, considering factors such as the impact on business operations and the load on server CPUs.
The design and development of Fujitsu’s SPARC Enterprise and ETERNUS DX provide numerous
energy-conserving features. This validation demonstrated that storage system energy consumption
can be cut without affecting system performance by shutting down the disk used for the database
backup area (except during backup acquisition) with the ETERNUS DX energy-saving “Eco-mode”
function. This reduces power and IT system operating costs.
This document discusses best practices for database ILM solutions based on a combination of the
Fujitsu ETERNUS DX disk storage system, SPARC Enterprise Unix server, and Oracle Database
11g to efficiently store and manage database data over extended periods, optimize storage costs, and
cut energy consumption.
2. Validation Objectives
If the vast amount of corporate data handled by companies is examined in closer detail, we see that certain data has the following characteristics.
• Recent data is accessed frequently and is often subject to high response requirement processing
(e.g., for online business operations).
• Access frequency decreases as data becomes older, and demand for performance decreases
accordingly.
This is true for data such as order history tables, where records are added and then stored as
long-term records. Information lifecycle management (ILM) is a term that refers to managing data
with such characteristics by optimizing costs and service levels to suit access frequency, processing
requirements, and data retention period.
ILM involves optimizing data storage costs by moving data accessed less frequently from costly
high-speed and limited capacity disks to lower-performance, high-capacity, low-cost disks.
ILM makes it possible to provide the storage capacity needed to store all of the data that needs to be
available, regardless of access frequency, at reduced cost. Since nearline disks generally offer greater
capacity per drive than high-speed disks, they make it possible to use fewer actual disk drives and
lower potential energy requirements.
Oracle Database includes the Oracle Partitioning function for partitioning tables based on the date on
which data was added. Although each table is normally stored in one table space, partitioning allows
the table to be divided into partitions and each partition assigned to a specific tablespace. ILM is
applied to the database by partitioning corresponding tables within the database in accordance with
data storage policies and assigning them to the table space corresponding to the service levels
required for the respective partition.
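A minimal sketch of this approach, assuming a simplified version of the ORDERSFACT table that appears later in this paper (the column names, sizes, and yearly boundaries are illustrative, not the validation schema):

```sql
-- Sketch: a range-partitioned table with each yearly partition assigned
-- to its own tablespace, so a whole year can later be moved as a unit.
CREATE TABLE ORDERSFACT (
  order_id    NUMBER,
  order_date  DATE,       -- partitioning key: the date the data was added
  amount      NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_2008 VALUES LESS THAN (TO_DATE('2009-01-01','YYYY-MM-DD'))
    TABLESPACE TS_2008,
  PARTITION p_2009 VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD'))
    TABLESPACE TS_2009
);
```

Because each partition maps to a tablespace, ILM policies can be applied per partition simply by choosing which disk group each tablespace's data files reside in.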
The Fujitsu ETERNUS DX disk storage system offers the advantage of allowing the incorporation of
a high-speed, high-reliability FC disk together with a high-capacity, low-cost nearline SATA disk
within the same package. ILM can then be achieved within a single storage package by using Oracle
Database to partition tables and allocating recent, frequently accessed data to the high-speed FC disk
and less frequently accessed archival data to the nearline SATA disk. ETERNUS DX also includes a
RAID migration function for transferring logical volumes to separate RAID groups, allowing logical
volumes on a RAID group consisting of an FC disk to be transferred within the storage space to a
nearline SATA disk. No database server resources are used, since data transfers based on RAID migration operate entirely within the storage system, minimizing the impact on the database. While MOVE
PARTITION is generally used to move data in Oracle Database, using RAID migration in
conjunction with ETERNUS DX offers a choice of methods for moving data.
The following validation was performed by Fujitsu and Oracle Japan using the GRID Center joint
validation center to establish design and operating procedures for ILM using ETERNUS DX and
Oracle Database.
3. Validation Equipment
This section describes the equipment used in this validation.
SPARC Enterprise is available as the SPARC Enterprise M9000, M8000, M5000, M4000, and M3000, offering mainframe-comparable reliability for mission-critical operations and addressing the operational challenges companies face, such as speed, continuity, TCO reduction, and environmental obligations. Other SPARC Enterprise models include the SPARC Enterprise T5440, T5240, T5220, T5140, and T5120, which offer the high throughput ideal for Web front-end operations and application servers.
The SPARC Enterprise M4000 is a mid-range class server that incorporates the functions
traditionally offered by high-end servers, including high performance, high reliability, and
virtualization technology. It provides the reliability of mainframes, using a SPARC64
VI/VII high-performance processor to achieve up to 16-core and 32-thread multi-core,
multi-thread configurations in a single device.
Mission-critical operations, including those using database and batch processing, typically
have high loads per transaction and sometimes require specific account processing
sequences. The SPARC64 VI/VII high-performance processor was developed to handle
such high-speed processing of high-load transactions. High performance is achieved using
new multi-core multi-thread technology in addition to powerful parallel command
processing capabilities and precision command branching prediction technologies.
Overall system performance is further increased with a strengthened system bus and use
of PCI Express.
The SPARC Enterprise M3000 also features the high level of reliability associated with the M4000 to M9000 models, ensuring reliability at the LSI, unit, and system levels.
1. Ensure business continuity and scalability for large data volumes without delays.
2. Ensure data integrity and security for correct data storage.
3. Ensure appropriate and flexible corporate-level access to large-volume data while
minimizing TCO.
Figure 3-1 Example of moving a RAID5(3+1) configuration of 300 GB disks to higher-capacity 450 GB disks in RAID5(3+1), and adding another logical volume (LUN2) in the resulting free space
Figure 3-2 Example of moving a RAID5(3+1) configuration of 300 GB disks to a RAID group with a different RAID level (RAID1+0)
3.2.2. Eco-mode
The ETERNUS DX features an “Eco-mode” (MAID technology 2) that controls disk drive rotation so that disks spin only when necessary, to suit customer system requirements.
Eco-mode reduces power consumption by shutting down, for specific periods, disks whose access is restricted to certain timeframes. Shutdown schedules are set per RAID group, specifying the disks and times, and can also be set to coincide with operations such as backup.
Disk drives using Eco-mode are shut down if no access occurs for longer than a configurable preset duration. If access is attempted while a disk is shut down, restarting the disk to permit access takes approximately one minute.
2 MAID (Massive Array of Idle Disks): technology for reducing power consumption and extending the service life of disk drives by shutting down rarely accessed disk drives.
3 Fujitsu comparison figures for a configuration with disks shut down for 20 hours daily versus normal backup volume disk operation.
4 For more information about Eco-mode, please refer to the following white paper: “Energy-savings using the Fujitsu ETERNUS disk array MAID technology” http://storage-system.fujitsu.com/jp/products/diskarray/download/pdf/MAID_whitepaper.pdf
4. Oracle Database 11g Functions
This validation used Oracle Database 11g with range partitioning. Oracle Database 11g also offers expanded functions for handling large numbers of partitions. For more detailed information, refer to the white paper “Partitioning with Oracle Database 11g”.5

5 http://otndnld.oracle.co.jp/products/database/oracle11g/pdf/twp_partitioning_11gR1.pdf
5. Validation Setup
This validation was performed with a RAC configuration composed of two database servers and one storage device.
(System configuration diagram: the database servers, an RX300 client, and the storage device, connected via 1000Base-T networks.)
5.1.3. Storage
Model: Fujitsu ETERNUS4000 Model 500*
Disk drives:
• Fibre Channel disk: 146 GB (15,000 rpm) × 38, 73 GB (15,000 rpm) × 12
• Nearline SATA disk: 750 GB (7,200 rpm) × 10
*Note: The ETERNUS DX440 is the successor to the ETERNUS4000 Model 500.
5.1.4. Client
Hardware:
• Model: Fujitsu PRIMERGY RX300
• CPU: Xeon E5540 (quad-core, 2.53 GHz)
• Memory: 8 GB (4 GB × 2)
• Internal HDD: SAS 300 GB (15,000 rpm) × 3 (RAID-5)
Software:
• OS: Windows Server 2003 R2
• Storage management: ETERNUS SF AdvancedCopy Manager, ETERNUS SF Storage Cruiser
For detailed information on the hardware and software used by Fujitsu, please contact
your Fujitsu sales representative.
These two transactions are executed with a multiplicity of 150. Each process executes 10 transactions, with order transactions and order change transactions executed at a ratio of nine to one.
(Figure 6-1: layering from the 2008 and 2007 tablespaces and data files through the ASM-tier disk group, OS-tier logical volumes, and storage-tier disk slices in a RAID group.)
Figure 6-1 shows the data model used in this validation, using 2007 and 2008 data as an example. Here, 12 months of data are stored in a single tablespace and a single data file, multiple data files are stored in one disk group, and each disk group is composed of multiple logical units.
Operations after introducing ILM are expected to involve periodic operations to move
older partitions to disks to suit the current access frequency. Periodically transferring data
with lower access frequency to low-cost disks prevents the increase of data on
high-performance disks and enables the overall increase in data to be handled by adding
low-cost disks, reducing the cost associated with adding disks.
ILM based on Oracle standard functions is considered for the ORDERSFACT table used
in this validation. The tables covered by operations in ILM are shown below together with
the transfer source and destination table space.
Tables covered by ILM operations: ORDERSFACT table
(i) The TS_2010 tablespace and partitions are added. (Figure 6-2)
The newly added partition has no optimizer statistics, which could change execution plans and degrade performance. In addition, a newly added partition contains very few rows compared with the other partitions, so freshly gathered statistics may differ significantly, which could likewise change execution plans and degrade performance. Copying the 2009 partition's optimizer statistics to the 2010 partition prevents performance degradation caused by execution plan changes.
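This statistics-copying operation can be sketched with DBMS_STATS.COPY_TABLE_STATS, available in Oracle Database 11g (the schema owner, DATAFILE clause, size, and partition names here are assumptions for illustration):

```sql
-- Sketch: add the 2010 tablespace and partition, then copy the 2009
-- partition's optimizer statistics to the new, nearly empty partition.
CREATE TABLESPACE TS_2010 DATAFILE '+DG_FC' SIZE 10G;

ALTER TABLE ORDERSFACT ADD PARTITION p_2010
  VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD'))
  TABLESPACE TS_2010;

BEGIN
  DBMS_STATS.COPY_TABLE_STATS(
    ownname     => 'ILM',         -- schema owner (assumed)
    tabname     => 'ORDERSFACT',
    srcpartname => 'P_2009',      -- copy from the most recent full partition
    dstpartname => 'P_2010');
END;
/
```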
(ii) The partition and table space (2004 partition and table space here) are deleted to
remove partitions containing data that no longer needs to be retained. (Figure 6-3)
(iii) The transfer destination table space is created on the SATA disk. (Figure 6-4)
(iv) The transfer source table space is set to “READ ONLY,” and the ALTER TABLE …
MOVE PARTITION statements are used to transfer the partition from the transfer
source table space to the transfer destination table space. (Figure 6-5)
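Step (iv) can be sketched as follows (the tablespace names follow the validation's naming convention; the monthly partition names are assumptions):

```sql
-- Sketch: freeze the source tablespace, then move each monthly partition
-- of the 2009 data to the destination tablespace on the SATA disk.
ALTER TABLESPACE TS_2009 READ ONLY;

ALTER TABLE ORDERSFACT MOVE PARTITION p_2009_01 TABLESPACE TS_2009_OLD;
-- ... repeat for each remaining monthly partition ...
ALTER TABLE ORDERSFACT MOVE PARTITION p_2009_12 TABLESPACE TS_2009_OLD;
```

Setting the source tablespace READ ONLY ensures the data being transferred cannot change mid-move; Oracle permits moving segments out of a read-only tablespace because only the data dictionary is updated on the source side.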
(v) The transfer source table space is dropped.
(Graph: CPU usage over time while executing the MOVE PARTITION statement, broken down into %usr, %sys, %wio, and %idle.)
If the MOVE PARTITION statement is executed while other operations are underway,
CPU usage will increase by roughly 10% while MOVE PARTITION is being executed.
Precautions are needed to avoid CPU shortfalls when executing MOVE PARTITION.
FC and SATA disk loads are checked next. Figure 6-7 shows the FC and SATA disk loads.
(Figure 6-7: FC and SATA disk busy rates (%) over time during MOVE PARTITION execution.)
Executing the MOVE PARTITION statement reads the data stored in the FC disk in
partition units and writes it to the SATA disk. The index is rebuilt once the data has been
transferred. Since the data has already been transferred to the SATA disk, it is read in from
the SATA disk, sorted, then written to the SATA disk.
For this validation, partitions were created for each month, and each year's partitions were transferred together. This is why the graph shows 12 peaks, one per monthly partition. Figure 6-8 illustrates the processing for individual partitions.
(Figure 6-8: per-partition processing phases (i)–(iii).)
Data is transferred as in (i) and the index is rebuilt as in (ii) and (iii). The SATA disk load
for index rebuilding is three times that for the data transfer. Based on these results, we
recommend determining the SATA disk load status and executing the MOVE
PARTITION statement when the disk load is low.
The graph also shows that index rebuilding is a lengthy process. The time required to
execute the MOVE PARTITION statement depends on the size and quantity of the
indexes.
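Whether the rebuild happens implicitly depends on how MOVE PARTITION is issued: without an UPDATE INDEXES clause, Oracle leaves the affected local index partitions UNUSABLE, and they must be rebuilt explicitly. A minimal sketch for finding and rebuilding them (the index name here is hypothetical):

```sql
-- Sketch: locate index partitions invalidated by MOVE PARTITION ...
SELECT index_name, partition_name, status
  FROM user_ind_partitions
 WHERE status = 'UNUSABLE';

-- ... and rebuild one of them on the destination tablespace.
ALTER INDEX ORDERSFACT_IDX REBUILD PARTITION p_2009_01
  TABLESPACE TS_2009_OLD;
```

Scheduling these rebuilds for periods of low SATA disk load follows the recommendation above, since the rebuild generates roughly three times the disk load of the data transfer itself.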
Figure 6-9 Throughput and response time for normal online processing
Figure 6-10 shows throughput and response times when 2009 data is transferred to the SATA disk by executing the MOVE PARTITION statement on a node not used for online processing, while online processing is running. Values are shown relative to the normal mean throughput and response time, each set to 1.
Executing the MOVE PARTITION statement with a node not used for online
processing during online processing will have virtually no impact on online processing.
As shown in Figure 6-11, there is virtually no difference in execution times for the
MOVE PARTITION statement.
(Figure 6-11: relative MOVE PARTITION execution times with the online application inactive versus active.)
As shown above, the impact on online processing can be minimized in a RAC setup by dividing operations and executing the MOVE PARTITION statement on a node with low load.
The graph shows how response times degrade at certain points. This is believed to be
due to past product search query delays occurring for the order change transaction due
to increased FC disk load when executing the MOVE PARTITION statement,
especially when transferring data. Figure 6-13 is a graph showing the FC disk busy
rate when executing the MOVE PARTITION statement during online processing.
The graph shows the time for which the disk busy rate reaches 100%. Of the order
change transactions, SQL statements searching for already ordered data appear to be
affected at these points.
We will now look at CPU use. Figure 6-14 is a graph showing CPU usage when only
online processing is performed and when the MOVE PARTITION statement is
executed at the same time. CPU usage is approximately 10% greater when executing
the MOVE PARTITION statement simultaneously than when performing online
processing alone.
These results show that when considering operations with a single DB setup, FC and
SATA disk loads are more important factors than CPU use. Online processing is likely
to be affected if the FC disk is subject to high loads.
Figure 6-15 Relative transaction performance when the FC disk is divided
As can be seen in Figure 6-15, no major differences are apparent in performance for
each transaction compared with normal operations.
If the FC disks have high loads, the impact on transactions can be minimized by
allocating the partitions accessed by online processing and the partitions moved using
the MOVE PARTITION statement respectively to different RAID groups to distribute
the disk I/O.
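One way to sketch this separation is to build the tablespace holding the partitions to be moved in an ASM disk group backed by a different RAID group from the one serving online partitions (DG_FC2 is a hypothetical second disk group; the size is illustrative):

```sql
-- Sketch: place the 2009 tablespace on a disk group built on a separate
-- FC RAID group, so moving its partitions does not compete with the
-- spindles serving the online partitions in +DG_FC.
CREATE TABLESPACE TS_2009 DATAFILE '+DG_FC2' SIZE 10G;

-- The later move then reads from DG_FC2 rather than the online RAID group.
ALTER TABLE ORDERSFACT MOVE PARTITION p_2009_01 TABLESPACE TS_2009_OLD;
```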
(Graph: tabulation query response times per execution, with and without the MOVE PARTITION statement running.)
Figure 6-18 CPU usage and Disk Busy Percent when running queries
Figure 6-18 shows how the busy rate for the SATA disk increases by approximately
20% when executing the MOVE PARTITION statement compared to when executing
tabulation queries only. There are also many times at which the disk busy rate reaches
100%. It is believed that this leads to delays in query data reading, impacting
transactions.
As mentioned earlier, the disk load is higher during index rebuilding than during data transfer. MOVE PARTITION processing may therefore also be delayed if index writes to the SATA disk coincide with query reads from it. Figure 6-19 shows processing times, relative to a normal MOVE PARTITION processing time of 1, when the MOVE PARTITION statement is executed on a node running tabulation processing.
(Figure 6-19: relative MOVE PARTITION processing times with the aggregation query inactive versus active.)
The graph reveals that the time taken is approximately 1.6 times the normal time. These results indicate the need to pay attention to access to the transfer destination when executing the MOVE PARTITION statement.
Validation Details and Results - Summary of ILM using MOVE PARTITION
6 [SPARC Enterprise and Oracle Database 11g Performance Validation]
http://primeserver.fujitsu.com/sparcenterprise/news/article/08/0527/
http://www.oracle.co.jp/solutions/grid_center/fujitsu/
Validation Details and Results - Efficiency and Physical Design Using RAID Migration
[Figure: mapping of the ORDERSFACT table across tiers: yearly tablespaces (2008, 2007) and their data files at the database tier, logical volumes at the ASM and OS tiers, and disk slices at the storage tier]
Although the example above shows 12 months of data in a single tablespace and a single
data file, monthly tablespaces can be used in actual practice, enabling data files to be
created for individual months. Similarly, although ILM is run annually here, logical
volumes and disk groups can be provided for individual months, allowing operation with
RAID migration performed 12 times a year. However, this requires 60 disk groups
(12 months × 5 years) for ILM alone, exceeding the upper limit of 63 ASM disk groups
once the disk groups for master tables and other areas are included. We recommend a
design that minimizes the number of disk groups, with a one-to-one correspondence
between ILM data transfer units and disk groups, to permit future system expansion.
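The disk group arithmetic above can be checked directly; the figures below are those quoted in the text (12 monthly transfers, 5 years of retention, a 63-disk-group ASM limit):

```python
# Check the disk-group counts for monthly vs. yearly ILM granularity,
# using the figures from the text: 5 years of retained data and an
# upper limit of 63 ASM disk groups.
ASM_DISK_GROUP_LIMIT = 63
RETENTION_YEARS = 5

monthly_groups = 12 * RETENTION_YEARS   # one disk group per month of data
yearly_groups = 1 * RETENTION_YEARS     # one disk group per year of data

print(monthly_groups)  # 60 groups for ILM alone
print(yearly_groups)   # 5 groups for ILM alone

# Monthly granularity leaves only 3 groups for everything else
# (master tables, other areas), so in practice it exceeds the limit.
assert monthly_groups == 60
assert ASM_DISK_GROUP_LIMIT - monthly_groups == 3
```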
Next we consider various operation procedures. The model used in this validation
involves the retention of the most recent whole year of data for online operations together
with 5 years of past data covered by tabulation operations. Since ILM data is moved in
1-year increments, logical volumes are provided for each year, as described above. The
data for the most recent year (2009) involves logical volumes on the RAID groups formed
using FC disks. The five years of earlier data is allocated to the five logical volumes for
each year created on the RAID groups formed using SATA disks.
This comprises the initial allocation. The disk group, the table space, and partition for the
next year (2010) can be allocated to the logical volume (#7) provided. Online operations
increasingly involve the processing of 2010 data, while online operations on 2009 data
decline.
Once online operations apply to the 2010 partition and 2009 data is no longer used by
online operations, the 2009 data can be transferred to the SATA disk. The logical volume
(#6) containing the 2009 data on the FC disk is then transferred to the SATA disk by
RAID migration.
Providing two RAID groups of FC disks for the latest data prevents disk I/O conflicts
between online operations and RAID migration.
Since the data storage policy specifies five years, 2004 data is subject to deletion. Before
deleting the 2004 data, however, we first use RAID migration to move the logical volume
(#1) to the FC disk previously containing the 2009 data.
The logical volume is moved so that it can be reused for the next year's (2011) data after
the 2004 data is deleted. If the logical volume holding the 2004 data were not moved, it
would have to be deleted on the ETERNUS DX after the 2004 data is deleted, and the
deletion confirmed with the ETERNUS multipath driver. A new logical volume would
then have to be created on the FC disk to store the 2011 data, requiring the addition of a
logical volume, recognition of the logical volume by the OS and driver, and creation of
slices. Transferring the logical volume previously containing the 2004 data to the FC disk
eliminates the need for these procedures. Additionally,
deleting data from the FC disk is faster than deleting data from the low-speed SATA disk.
The 2004 data should therefore be deleted following RAID migration.
The data is ultimately configured as shown in Figure 6-26. Logical Volume #1 is ready to
serve as the logical volume for the next year’s (2011) data, resulting in a configuration
more or less identical to the initial allocation. The same procedures can be applied to
manage ILM for 2012 and subsequent years.
Validation Details and Results - Impact of RAID Migration on Operations
This section described the procedures used to achieve ILM using RAID migration and the
various issues associated with physical database design for such ILM operation. The
differences from ILM using Oracle standard functions can be summarized as shown
below.
• Dedicated logical volumes are assigned to the units of data transfer in ILM.
• Two FC RAID groups, used alternately, are provided to store the most recent data.
• Data exceeding its storage period is transferred to the FC disk by RAID migration
before being deleted.
For more information on these procedures, refer to “9.3 ILM Procedures Using RAID
Migration.”
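The annual rotation these points describe can be sketched as a small model. The volume numbers (#1 to #7) and tier assignments follow the figures in this section; the code is only an illustration of the bookkeeping, not part of the validated procedure:

```python
# Minimal model of the annual ILM rotation described above.
# volumes maps logical volume number -> (tier, year of data); the
# numbering (#1..#7) follows this section's figures.
volumes = {
    1: ("SATA", 2004), 2: ("SATA", 2005), 3: ("SATA", 2006),
    4: ("SATA", 2007), 5: ("SATA", 2008),
    6: ("FC", 2009),   7: ("FC", 2010),
}

def annual_rotation(volumes, new_year):
    """One ILM cycle, as described in the text:
    1. migrate the volume of the year leaving online use from FC to
       SATA (RAID migration);
    2. migrate the volume holding the expired year from SATA to FC,
       delete its data there (deletion is faster on FC), and reuse
       the volume for the coming year's data.
    """
    years = {n: y for n, (_tier, y) in volumes.items()}
    oldest = min(years, key=years.get)        # expired year (e.g. 2004)
    aging = sorted(years, key=years.get)[-2]  # year leaving online use (2009)
    volumes[aging] = ("SATA", years[aging])
    volumes[oldest] = ("FC", new_year)        # data deleted, volume reused
    return volumes

annual_rotation(volumes, 2011)
print(volumes[1])  # ('FC', 2011): volume #1 reused for the new year
print(volumes[6])  # ('SATA', 2009): last year's data moved to SATA
```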
Figure 6-27 Throughput and response time for normal online processing
Figure 6-28 shows online operational throughput and response times during RAID
migration to transfer 2009 data on the FC disk to the SATA disk. It gives values as relative
values, with normal mean throughput and response time defined as 1.
36
Copyright © 2009-2010 FUJITSU LIMITED, All Rights Reserved
Copyright © 2009-2010 Oracle Corporation Japan. All Rights Reserved.
Validation Details and Results - Impact of RAID Migration on Operations
Figure 6-28 Effect on online operations when executing RAID migration
to move 2009 data to SATA disks
Both throughput and response time for online operations remain at a relative value of
roughly 1 during RAID migration, showing that RAID migration has no effect on online
operations.
Figure 6-29 shows online operational throughput and response times during RAID
migration to transfer 2004 data on the SATA disk to the FC disk. It gives values as relative
values, with normal mean throughput and response time defined as 1.
Figure 6-29 Effect on online operations when executing RAID migration
to move 2004 data to FC disks
As when transferring 2009 data to the SATA disk by RAID migration, online operational
throughput and response times remain unaffected.
We next examined CPU usage and disk busy rates for the database server, starting with
normal operations.
Figure 6-30 CPU usage and Disk Busy Percent of Database Server for normal online processing
CPU usage is approximately 40% overall, while the disk busy rate is around 50% to 60%
with a steady load applied.
Figures 6-31 and 6-32 show the database server CPU use and disk busy rates during
RAID migration. The load does not differ from normal operations in either case.
Figure 6-31 CPU usage and disk busy rate when executing RAID migration
to move 2009 data from FC disks to SATA disks
Figure 6-32 CPU usage and disk busy rate when executing RAID migration
to move 2004 data from SATA disks to FC disks
CPU use does not differ from normal operations, since the database server CPU is not
used for RAID migration. Similarly, disk busy rates remain the same as for normal
operations, since the disk used for online operations is separate from the disk used for
RAID migration.
Figure 6-33 CPU usage of the database server while executing RAID migration
Figure 6-34 Disk busy rate of the RAID group while executing RAID migration
[Figure: CPU usage and disk busy rate for normal tabulation processing]
The SPARC Enterprise M3000 used for tabulation processing has a total of eight threads
(1 CPU × 4 cores × 2 threads), and tabulation processing runs with a multiplicity of one.
Thus, while overall CPU use is approximately 10%, the load more or less fully occupies
one thread. The disk busy rate is at least 80%, indicating an extremely high disk load.
We examined the impact on operations of moving the 2009 data from the FC disk to the
SATA disk by RAID migration while using this tabulation processing in sequence on the
2008 data. Figure 6-36 shows the relative values for query response when the query
response for individual use is defined as 1.
[Figure 6-36: relative query response per execution, with the RAID migration interval marked]
Clearly, query response suffers serious delays during concurrent RAID migration. The
query response returns to its original value once RAID migration ends. We next examined
CPU usage and disk busy rate.
[Figure: CPU usage and disk busy rate while RAID migration runs concurrently with tabulation queries]
CPU use drops immediately after the start of RAID migration, returning to its original
level once RAID migration completes. The disk busy rate reaches 100% as soon as RAID
migration starts. These results are believed to indicate a disk bottleneck: reads at the
database are held up because the disk must service both the reads issued by tabulation
processing and the writes issued by RAID migration. The RAID migration itself also
takes approximately 40% longer.
[Figure: relative time required for RAID migration, alone vs. with concurrent queries]
There is no impact on online operations, since the RAID group used for RAID migration
is separate from the RAID group used for online operations. Tabulation processing is
affected by RAID migration, since the RAID group used for RAID migration is not
distinct from the RAID group used for tabulation. Additionally, the RAID migration itself
takes longer. We recommend performing RAID migration from the FC disk to the SATA
disk when the SATA disk is not being accessed or when operations accessing the SATA
disk are suspended.
Validation Details and Results - Time Taken for RAID Migration
[Figure: relative time required for RAID migration, FC to SATA vs. SATA to FC]
RAID migration from the SATA disk to the FC disk, which involves writing to the FC
disk, takes approximately half the time required for migration from the FC disk to the
SATA disk, which involves writing to the SATA disk. Because SATA disks emphasize
capacity, their performance requirements tend to be lower than those of FC disks. The
time required for RAID migration therefore varies significantly with the type of
destination disk.
“6.1.2 Resource Use Status when Executing MOVE PARTITION” shows that the time
required for ILM operations varies with the configuration and number of indexes involved
in the MOVE PARTITION. For RAID migrations, since logical volumes are moved
within storage, the time required for RAID migration is determined by the RAID
configuration and volume size, regardless of the internal logical volume configuration,
provided the RAID migration disk I/O does not overlap that of operations.
Validation Details and Results - Effects of Disk Performance Differences on Tabulation Processing
RAID migration is clearly effective in moving data in ILM, but with the following
caveats:
• The system must be designed specifically for ILM operations.
For more information on design procedures, see “6.2.1 Efficiency and Physical Design
Using RAID Migration.”
• Space cannot be reduced using storage functions.
For more information on methods for reducing space, refer to “9.3.2 Support for Data
Reduction.”
• LUN concatenation 7 cannot be used to expand space.
For more information on ways to expand space, refer to “9.3.1 Support for Increased
Data Volumes.”
7 LUN concatenation: A technique for increasing the capacity of an existing logical volume by separating unused
space into a new logical volume and linking it to the existing one.
Validation Details and Results - Summary of ILM Validation
Figure 6-41 is a graph comparing processing times for tabulation processing on FC and
SATA disks for a previous month sales comparison (Q1) and sales by employee (Q2).
The tabulation processing for previous month sales comparison (Q1) on the SATA disk takes
approximately 1.2 times the time required with the FC disk.
However, for the sales by employee tabulation (Q2), there is no significant difference in
time taken between the FC and SATA disks, likely because the query is dominated by
CPU-intensive operations such as GROUP BY and ORDER BY, minimizing the effect of
differences in disk performance.
[Figure 6-41: relative tabulation processing times (Q1, Q2) on FC and SATA disks]
[RAID migration]
The major advantage of RAID migration is that it does not consume OS resources (CPU),
since it operates within storage. This means ILM can be run without imposing loads on
operations.
[MOVE PARTITION]
MOVE PARTITION is a feature of Oracle Partitioning and does not require dedicated
ILM storage design for use with ILM.
It reduces the space needed, eliminates fragmentation, and compresses segments to
improve SATA disk access performance.
7. Backup
Backing up data is an essential part of database operations. The following techniques can
also reduce the storage costs and power consumption associated with backup.
• Savings in disk cost, disk count, and power consumption by using high-capacity,
low-cost SATA disks as backup destination disks.
• Using Eco-mode to reduce power consumption by using disks only when required for backup.
(For more information on reducing power consumption, refer to “9.1 Using Eco-mode to Reduce
Power Consumption.”)
When a disk is shut down in Eco-mode, it takes approximately 1 minute to restart, resulting in
potential backup delays. However, Eco-mode scheduling can be used to set a specific daily time
(disk operation time) for backups, which addresses this problem by starting the disk before the
backup and shutting it down once again afterwards.
If you choose to use a SATA disk instead of an FC disk as the backup destination disk, you must
consider the potential impact on backup performance.
This section discusses an assessment of backup performance with FC and SATA backup disks.
[Figure: backup validation configuration: FC and SATA volumes of approximately 130 GB each used as backup source and destination, covering the patterns FC to FC, FC to SATA, SATA to FC, and SATA to SATA]
[Figure: relative time required for volume backup in each pattern (FC to FC, FC to SATA, SATA to FC, SATA to SATA), and amount written over time for each pattern]
Figure 7-3 Comparison of required time and amount written for volume backup
[Figure: backup source on RAID Groups 1 and 2, copied by OPC at multiplicities 1, 2, and 4]
The outline diagrams for the multiplexed backups are shown below.
When the OPC backup of one volume completes, the OPC backup of the next volume
starts.
[Figure: OPC copy sections for volumes 1 to 4, each starting after the previous one ends]
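As a rough illustration of why multiplicity shortens the backup window, the sketch below models n volumes copied by up to m concurrent OPC sessions, assuming a constant per-volume copy time and no disk bottleneck. This is an idealized model of our own, not a measured result; in practice, destination disks (especially SATA) can saturate and reduce the gain.

```python
# Idealized model of multiplexed OPC backup time: n volumes, each taking
# per_volume_time units to copy, with up to `multiplicity` copies running
# concurrently. Assumes the disks never become a bottleneck.
import math

def total_backup_time(n_volumes: int, per_volume_time: float,
                      multiplicity: int) -> float:
    # Volumes are copied in waves of `multiplicity`; a new copy starts as
    # soon as a slot frees, so the total is ceil(n/m) waves.
    return math.ceil(n_volumes / multiplicity) * per_volume_time

for m in (1, 2, 4):
    print(m, total_backup_time(4, 1.0, m))
# multiplicity 1 -> 4.0, 2 -> 2.0, 4 -> 1.0 time units
```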
[Figure: relative time required for backup to FC and SATA destinations at multiplicities 1, 2, and 4]
Figure 7-8 Comparison of time required at each multiplicity
8. Summary
This validation clarified methods for moving data in ILM, accounting for the particular
characteristics of FC and SATA disks and revealing the storage configurations best suited to ILM.
MOVE PARTITION is better suited to moving data if the focus is on flexibility and ease of
use, while ETERNUS DX RAID migration is better at minimizing the impact of data
transfer on operations. The choice between the two should be based on customer
requirements.
We also determined design and operating methods for minimizing the impact on operations,
based on the performance characteristics observed when moving data. Nearline SATA disks
offer lower processing performance than FC disks, making ILM storage design and the
volume of operational I/O important factors to consider. For systems involving numerous
I/O operations, storage design must account for the access volume during and after data
transfers to minimize the effect on operational response when implementing ILM.
The validation also highlights important points associated with storage configurations when using
Eco-mode or nearline disks to reduce power consumption for backup, a vital aspect of database
systems.
Using nearline SATA disks has a major impact on backup acquisition times. This means measures
like providing multiple backup destination disks are needed if acquisition time is crucial. Storage
design must balance backup requirements against cost considerations.
As described above, we established effective designs and methods to reduce storage costs using ILM,
while accounting for the characteristics of different disk types.
ILM involves many requirements that can be anticipated in advance, including storage
design and the performance characteristics of different disk types. Fujitsu ETERNUS DX
offers flexible support for a wide range of storage requirements by enabling the use of
different disk types within a single system, providing integrated management, and
combining functions to minimize the impact on operations while data is being moved.
A database system incorporating ILM using Oracle Database, Fujitsu SPARC Enterprise, and
ETERNUS DX can cut storage costs and power consumption while retaining high performance and
reliability.
9. Appendix
One possible application for Eco-mode is disk-to-disk backups. The backup target disk in
disk-to-disk backups is normally accessed only for backups.
ETERNUS DX can define RAID groups for backup target capacity and use Eco-mode
control to operate the RAID group for backups only. This helps reduce power consumption
for nearline disk drives and reduces air-conditioning needs.
and memory). Cooling efficiency is increased by splitting the package into two cooling
groups. The speed of the cooling fans can be controlled to achieve power savings by
reducing cooling fan speeds.
Fujitsu actively seeks to reduce energy consumption for entire systems, in addition to energy
savings for individual devices such as servers and storage units. Two such measures are the
Power Interlock Function and Scheduling Function. These link the power supply control for
the server and peripheral devices and schedule system operations to reduce power
consumption by automatically switching off power to devices at times when system
operations are not required—for example, at night or during holidays. This helps reduce CO2
emissions and energy costs.
The SPARC Enterprise M4000, SPARC Enterprise M3000, and ETERNUS DX400 used in
this validation include Remote Cabinet Interface (RCI) as a power interlock function
interface. The power supply to other devices can be shut off automatically in sync with the
master server power supply by connecting them with RCI cables.
The SPARC Enterprise M9000, M8000, M5000, M4000, and M3000 can also be scheduled
to turn on and off device power automatically.
These two power interlock and scheduling functions can be deployed to automatically turn
on or off all devices connected using RCI cables at specified times.
Shutting down the system normally entails shutting down the server and then the storage
unit (the reverse of the startup procedure). Without power interlocking, an operator must
switch power to each device off (or on) in this sequence, shutting down at the end of the
work day and starting up early the next morning, before the work day begins, to avoid
affecting business operations.
Configuring such automatic operations with conventional servers and storage products
requires a separate power supply control unit and operation management software, as well as
significant labor requirements if manual intervention is required late at night or early in the
morning.
Deploying SPARC Enterprise and ETERNUS DX connected via RCI eliminates the startup
and operating costs associated with the additional equipment and software typically
required; it also eliminates personnel costs.
Combining Oracle Database with ETERNUS DX and SPARC Enterprise and using the
power interlock and scheduling functions in conjunction with ILM improves efficiency and
cuts costs and CO2 emissions.
Example of cost savings achieved using scheduled stoppages with the configuration
used in this validation
This compares the cost of running the configuration used in this validation continuously for
one year against the use of scheduled stoppages involving operations for 18 hours/day and
shutdown periods of 6 hours/day on weekdays (240 days/year) and shutdown periods for 24
hours/day on holidays (125 days/year).
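The powered-on hours implied by this schedule can be worked out directly. The roughly 50% figure below is our own arithmetic from the stated schedule, assuming power draw simply scales with powered-on time; it is not a measured result:

```python
# Powered-on hours per year: continuous operation vs. the scheduled
# stoppage described above (18 h/day on 240 weekdays, powered off for
# the full 24 h on 125 holidays).
WEEKDAYS, HOLIDAYS = 240, 125
assert WEEKDAYS + HOLIDAYS == 365

continuous_hours = 365 * 24      # 8760 hours/year, always on
scheduled_hours = WEEKDAYS * 18  # 4320 hours/year with scheduled stoppage

saving = 1 - scheduled_hours / continuous_hours
print(continuous_hours, scheduled_hours)
print(f"{saving:.1%}")  # roughly 50.7% fewer powered-on hours
```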
[Figure: electricity cost and CO2 emissions for continuous operation vs. implementation of scheduled stoppage]
Tablespace sizes may become inadequate and need to be expanded as online operations
grow.
The following three methods are available for expanding tablespace size.
• Expand logical volume size using RAID migration.
Note that the tablespace size should include an additional margin, since this method
only allows the size to be expanded during RAID migration from SATA to FC disks. If
sizes remain inadequate due to unexpected increases in operating volumes, use methods
(2) or (3) below instead.
• This method cannot be used for BIGFILE tablespaces due to the
one-data-file-per-tablespace limit.
It may be desirable to reduce the size of logical volumes when moving logical volumes
containing old data to a SATA disk if space size estimates are incorrect. Logical volume
sizes cannot be reduced using storage functions; other methods must be applied.
Two methods for reducing the size of logical volumes are shown below.
• ILM using MOVE PARTITION
8 LDE (Logical Device Expansion): A function that increases storage capacity by adding new disks to a RAID
group without halting operations.
[Figure: example configuration: RAID Group #1 (FC, RAID1+0(4+4)) holding new data, and RAID Groups #2 to #6 (FC, RAID5(4+1)) holding old data that can be consolidated onto a single SATA RAID5(4+1) group]
The example uses a disk configuration in which all data is stored on FC disks (Table 9-1).
The capacity available for storing old data is provided by RAID groups 2, 3, 4, 5, and 6,
totaling 146 GB × 20 (4 × 5 groups) = 2,920 GB. Replacing this with high-capacity, low-cost
SATA disks (750 GB, 7,200 rpm) gives 750 GB × 4 = 3,000 GB for RAID5 (4+1), cutting
the number of disks from 25 to 5.
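The capacity figures above can be verified with a short calculation; RAID5(4+1) provides four disks' worth of usable capacity per five-disk group:

```python
# Usable capacity of the old-data area before and after consolidation.
# RAID5(4+1): 5 disks per group, 4 of them usable for data.
FC_DISK_GB, SATA_DISK_GB = 146, 750

fc_groups = 5                             # RAID Groups #2 to #6
fc_capacity = FC_DISK_GB * 4 * fc_groups  # 2920 GB usable on FC
sata_capacity = SATA_DISK_GB * 4          # one RAID5(4+1) group: 3000 GB

fc_disks = 5 * fc_groups                  # 25 FC disks
sata_disks = 5                            # 5 SATA disks

print(fc_capacity, sata_capacity)         # 2920 3000
print(fc_disks, "->", sata_disks)         # 25 -> 5
assert sata_capacity >= fc_capacity       # SATA group covers the old data
```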
In comparison with the cost of the disk drives in Figure 9-1, this configuration based on
SATA disks cuts disk costs by roughly 60%.
Using high-capacity, low-cost SATA disks can significantly reduce the number of disks
required and storage costs.
Oracle Corporation Japan: 2-5-8 Kita-Aoyama, Minato-ku, Tokyo 107-0061, Japan
FUJITSU LIMITED: 1-5-2 Higashi-Shimbashi, Minato-ku, Tokyo 105-7123, Japan
Copyright © 2009-2010, Oracle Corporation Japan. All Rights Reserved.
Copyright © 2009-2010, FUJITSU LIMITED, All Rights Reserved
Duplication prohibited
This document is provided for informational purposes only. The contents of the document are subject to change
without notice. Neither Oracle Corporation Japan nor Fujitsu Limited warrant that this document is error-free, nor do
they provide any other warranties or conditions, whether expressed or implied, including implied warranties or
conditions concerning merchantability with respect to this document. This document does not form any contractual
obligations, either directly or indirectly. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without prior written permission from Oracle Corporation Japan.
This document is intended to provide technical information regarding the results of verification tests conducted at the
Oracle GRID Center. The contents of the document are subject to change without notice to permit improvements.
Fujitsu Limited makes no warranty regarding the contents of this document, nor does it assume any liability for
damages resulting from or related to the document contents.
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle Corporation and/or its
subsidiaries and affiliates in the United States. Other product names mentioned are trademarks or registered
trademarks of their respective companies.
Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United
States and other countries.
Red Hat is a registered trademark or trademark of Red Hat, Inc. in the United States and other countries.
Linux is a trademark of Linus Torvalds.
UNIX is a registered trademark of The Open Group in the United States and other countries.
All SPARC trademarks are used under license from SPARC International, Inc., and are registered trademarks of that
company in the United States and other countries. Products bearing the SPARC trademark are based on an
architecture developed by Sun Microsystems, Inc.
SPARC64 is used under license from U.S.-based SPARC International, Inc. and is a registered trademark of that
company.
Sun, Sun Microsystems, the Sun logo, Solaris, and all Solaris-related trademarks and logos are trademarks or
registered trademarks of Sun Microsystems, Inc. in the United States and other countries, and are used under license
from that company.
Other product names mentioned are the product names, trademarks, or registered trademarks of their respective
companies.
Note that system names or product names in this document may not be accompanied by trademark notices (®, ™).