ARC-1880 Series
(PCIe 2.0 to 6Gb/s SAS RAID Controllers)
User's Manual
Version: 1.11 Issue Date: December, 2011
FCC Statement
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against interference in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation.
Contents
1. Introduction .............................................................. 10
1.1 Overview .... 10
1.2 Features .... 12
3.7.3 Volume Set Function .... 60
3.7.3.1 Create Volume Set (0/1/10/3/5/6) .... 61
  Volume Name .... 63
  Capacity .... 64
  Stripe Size .... 66
  SCSI Channel .... 66
  SCSI ID .... 67
  Cache Mode .... 68
  Tag Queuing .... 68
3.7.3.2 Create Raid30/50/60 (Volume Set 30/50/60) .... 69
3.7.3.3 Delete Volume Set .... 70
3.7.3.4 Modify Volume Set .... 70
3.7.3.5 Check Volume Set .... 72
3.7.3.6 Stop Volume Set Check .... 73
3.7.3.7 Display Volume Set Info. .... 73
3.7.4 Physical Drives .... 74
3.7.4.1 View Drive Information .... 74
3.7.4.2 Create Pass-Through Disk .... 75
3.7.4.3 Modify Pass-Through Disk .... 75
3.7.4.4 Delete Pass-Through Disk .... 75
3.7.4.5 Identify Selected Drive .... 76
3.7.4.6 Identify Enclosure .... 76
3.7.5 Raid System Function .... 77
3.7.5.1 Mute The Alert Beeper .... 78
3.7.5.2 Alert Beeper Setting .... 78
3.7.5.3 Change Password .... 79
3.7.5.4 JBOD/RAID Function .... 79
3.7.5.5 Background Task Priority .... 80
3.7.5.6 SATA NCQ Support .... 81
3.7.5.7 HDD Read Ahead Cache .... 81
3.7.5.8 Volume Data Read Ahead .... 82
3.7.5.9 Hdd Queue Depth Setting .... 82
3.7.5.10 Empty HDD Slot LED .... 83
3.7.5.11 Controller Fan Detection .... 84
3.7.5.12 Auto Activate Raid Set .... 84
3.7.5.13 Disk Write Cache Mode .... 85
3.7.5.14 Capacity Truncation .... 85
3.7.6 HDD Power Management .... 86
3.7.6.1 Stagger Power On .... 87
3.7.6.2 Time to Hdd Low Power Idle .... 88
3.7.6.3 Time To Low RPM Mode .... 88
3.7.6.4 Time To Spin Down Idle Hdd .... 89
3.7.7 Ethernet Configuration .... 90
3.7.7.1 DHCP Function .... 90
3.7.7.2 Local IP address .... 91
3.7.7.3 HTTP Port Number .... 92
3.7.7.4 Telnet Port Number .... 92
3.7.7.5 SMTP Port Number .... 93
3.7.8 View System Events .... 94
3.7.9 Clear Events Buffer .... 94
3.7.10 Hardware Monitor .... 95
3.7.11 System Information .... 95
  For Windows .... 108
  For Linux .... 109
  For FreeBSD .... 111
  For Solaris 10 X86 .... 111
  For Mac OS 10.X .... 112
  ArcHttp Configuration .... 112
6.1 Start-up McRAID Storage Manager .... 116
  Start-up McRAID Storage Manager from Windows Local Administration .... 117
  Start-up McRAID Storage Manager from Linux/FreeBSD/Solaris/Mac Local Administration .... 118
  Start-up McRAID Storage Manager Through Ethernet Port (Out-of-Band) .... 118
6.2 6Gb/s SAS RAID controller McRAID Storage Manager .... 119
6.3 Main Menu .... 120
6.4 Quick Function .... 120
6.5 Raid Set Functions .... 121
6.5.1 Create Raid Set .... 121
6.5.2 Delete Raid Set .... 122
6.5.3 Expand Raid Set .... 122
6.5.4 Offline Raid Set .... 123
6.5.5 Activate Incomplete Raid Set .... 124
6.5.6 Create Hot Spare .... 125
6.5.7 Delete Hot Spare .... 125
6.5.8 Rescue Raid Set .... 126
6.6 Volume Set Functions .... 126
6.6.1 Create Volume Set (0/1/10/3/5/6) .... 127
  Volume Name .... 127
  Volume Raid Level .... 128
  Capacity .... 128
  Greater Two TB Volume Support .... 128
  Initialization Mode .... 128
  Stripe Size .... 129
  Cache Mode .... 129
  Tagged Command Queuing .... 129
  SCSI Channel/SCSI ID/SCSI Lun .... 129
6.6.2 Create Raid30/50/60 (Volume Set 30/50/60) .... 130
6.6.3 Delete Volume Set .... 131
6.6.4 Modify Volume Set .... 131
6.6.4.1 Volume Growth .... 132
6.6.4.2 Volume Set Migration .... 133
6.6.5 Check Volume Set .... 133
6.6.6 Schedule Volume Check .... 134
6.7 Physical Drive .... 135
6.7.1 Create Pass-Through Disk .... 135
6.7.2 Modify Pass-Through Disk .... 136
6.7.3 Delete Pass-Through Disk .... 137
6.7.4 Identify Enclosure .... 137
6.7.5 Identify Drive .... 138
6.8 System Controls .... 138
6.8.1 System Config .... 138
  System Beeper Setting .... 139
  Background Task Priority .... 139
  JBOD/RAID Configuration .... 139
  SATA NCQ Support .... 139
  HDD Read Ahead Cache .... 139
  Volume Data Read Ahead .... 139
  HDD Queue Depth .... 140
  Empty HDD Slot LED .... 140
  SES2 Support .... 140
  Max Command Length .... 140
  Auto Activate Incomplete Raid .... 140
  Disk Write Cache Mode .... 140
  Disk Capacity Truncation Mode .... 141
6.8.2 Advanced Configuration .... 141
6.8.3 HDD Power Management .... 144
6.8.3.1 Stagger Power On Control .... 145
6.8.3.2 Time to Hdd Low Power Idle .... 145
6.8.3.3 Time To Hdd Low RPM Mode .... 146
6.8.3.4 Time To Spin Down Idle HDD .... 146
6.8.3.5 SATA Power Up In Standby .... 146
6.8.4 Ethernet Configuration .... 146
6.8.5 Alert By Mail Configuration .... 147
6.8.6 SNMP Configuration .... 148
6.8.7 NTP Configuration .... 148
6.8.8 View Events/Mute Beeper .... 149
6.8.9 Generate Test Event .... 150
6.8.10 Clear Events Buffer .... 150
6.8.11 Modify Password .... 151
6.8.12 Update Firmware .... 151
6.9 Information .... 152
6.9.1 Raid Set Hierarchy .... 152
6.9.2 SAS Chip Information .... 152
6.9.3 System Information .... 153
6.9.4 Hardware Monitor .... 154
Upgrading Flash ROM Update Process .... 155
Battery Backup Module (ARC-6120BA-T113) .... 159
SNMP Operation & Installation .... 163
Event Notification Configurations .... 175
A. Device Event .... 175
B. Volume Event .... 176
C. RAID Set Event .... 177
D. Hardware Monitor Event .... 177
RAID Concept .... 179
  RAID Set .... 179
  Volume Set .... 179
Ease of Use Features .... 180
  Foreground Availability/Background Initialization .... 180
  Online Array Roaming .... 180
  Online Capacity Expansion .... 180
  Online Volume Expansion .... 183
High Availability .... 183
  Global/Local Hot Spares .... 183
  Hot-Swap Disk Drive Support .... 184
  Auto Declare Hot-Spare .... 184
  Auto Rebuilding .... 185
  Adjustable Rebuild Priority .... 185
High Reliability .... 186
  Hard Drive Failure Prediction .... 186
  Auto Reassign Sector .... 186
  Consistency Check .... 187
Data Protection .... 187
  Battery Backup .... 187
  Recovery ROM .... 188
Understanding RAID .... 189
  RAID 0 .... 189
  RAID 1 .... 190
  RAID 10(1E) .... 191
  RAID 3 .... 191
  RAID 5 .... 192
  RAID 6 .... 193
  RAID x0 .... 193
  Single Disk (Pass-Through Disk) .... 194
1. Introduction
This section presents a brief overview of the 6Gb/s SAS RAID controller ARC-1880 series (PCIe 2.0 to 6Gb/s SAS RAID controllers).
1.1 Overview
SAS 2.0 is designed for much higher data transfer speeds than previously available, while remaining backward compatible with SAS 1.0. The 6Gb/s SAS interface supports both 6Gb/s and 3Gb/s SAS/SATA disk drives for data-intensive applications and 6Gb/s or 3Gb/s SATA drives for low-cost bulk storage of reference data. The ARC-1880 family includes an 8-port external model as well as models with 12/16/24 internal ports plus 4 additional external ports. The ARC-1880LP/1880i/1880x support eight 6Gb/s SAS ports via one internal and one external, two internal, or two external min SAS connectors, respectively. The ARC-1880ix-12/16/24 or ARC-1880ixl-12 attaches directly to SATA/SAS midplanes with 3/4/6 SFF-8087 internal connectors, or increases capacity using one additional SFF-8088 external connector. When used with 6Gb/s SAS expanders, the controller can provide up to 128 devices through one or more 6Gb/s SAS JBODs, making it an ideal solution for enterprise-class storage applications that call for maximum configuration flexibility. The ARC-1880LP/1880i/1880x/1880ixl-8/1880ixl-12 6Gb/s RAID controllers are low-profile PCI cards, ideal for 1U and 2U rack-mount systems. These controllers utilize the same RAID kernel that has been field-proven in existing external RAID controller products, allowing Areca to quickly bring stable and reliable PCIe 2.0 6Gb/s SAS RAID controllers to the market.
Unparalleled Performance
The 6Gb/s SAS RAID controllers raise the standard to higher performance levels with several enhancements, including a new high-performance ROC processor, a DDR2-800 memory architecture and a high-performance PCIe 2.0 x8 lane host interface bus interconnection. The low-profile controllers support 512MB of on-board ECC DDR2-800 SDRAM memory by default. The ARC-1880ix-12/16/24 controllers each include one 240-pin DIMM socket with 1GB of ECC DDR2-800 registered SDRAM by default, upgradeable to 4GB. The optional battery backup module provides power to the cache if it contains data not yet written to the drives when power is lost. The controllers deliver leading overall performance compared to other 6Gb/s SAS RAID controllers. The powerful new ROC processor, with eight 6Gb/s SAS ports integrated on chip, delivers high performance for servers and workstations.
Maximum Interoperability
The 6Gb/s SAS RAID controllers support a broad range of operating systems, including Windows 7/2008/Vista/XP/2003, Linux (Open Source), FreeBSD (Open Source), Solaris (Open Source), Mac, VMware and more, along with key system monitoring features such as enclosure management (SES-2, SMP and SGPIO) and SNMP function. Our products and technology are based on an extensive testing and validation process; they leverage the field-proven compatibility of Areca SAS and SATA RAID controllers with operating systems, motherboards, applications and device drivers.
1.2 Features
Controller Architecture
- 800MHz RAID-on-Chip (ROC) processor
- PCIe 2.0 x8 lane host interface
- 512MB on-board DDR2-800 SDRAM with ECC (ARC-1880LP/1880i/1880x/1880ixl-8/1880ixl-12)
- One 240-pin DIMM socket for DDR2-800 ECC registered SDRAM module using x8 or x16 chip organization, upgradeable from 1GB (default) to 4GB (ARC-1880ix-12/16/24)
- Write-through or write-back cache support
- Support up to 4/8/12/16/24 internal and/or 4/8 external 6Gb/s SAS ports
- Multi-adapter support for large storage requirements
- BIOS boot support for greater fault tolerance
- BIOS PnP (plug and play) and BBS (BIOS boot specification) support
- Support EFI BIOS for Mac Pro
- NVRAM for RAID event & transaction log
- Redundant flash image for controller availability
- Battery Backup Module (BBM) ready (option)
- RoHS compliant
RAID Features
- RAID level 0, 1, 10(1E), 3, 5, 6, 30, 50, 60, Single Disk or JBOD
- Multiple RAID selection
- Online array roaming
- Offline RAID set
- Online RAID level/stripe size migration
- Online capacity expansion and RAID level migration simultaneously
- Online volume set growth
- Instant availability and background initialization
- Support global and dedicated hot spare
- Automatic drive insertion/removal detection and rebuilding
- Greater than 2TB capacity per disk drive support
- Greater than 2TB per volume set (64-bit LBA support)
- Support intelligent power management to save energy and extend service life
- Support NTP protocol to synchronize the RAID controller clock over the on-board Ethernet port

Monitors/Notification
- System status indication through global HDD activity/fault connector, individual activity/fault connector, LCD/I2C connector and alarm buzzer
- SMTP support for email notification
- SNMP support for remote manager
- Enclosure management (SES-2, SMP and SGPIO) ready

RAID Management
- Field-upgradeable firmware in flash ROM

In-Band Manager
- Hot key "boot-up" McBIOS RAID manager via M/B BIOS
- Web browser-based McRAID storage manager via ArcHttp proxy server for all operating systems
- Support Command Line Interface (CLI)
- API library for customer to write monitor utility
- Single Admin Portal (SAP) monitor utility

Out-of-Band Manager
- Firmware-embedded web browser-based McRAID storage manager, SMTP manager, SNMP agent and Telnet function via Ethernet port
- API library for customer to write monitor utility
- Support push button and LCD display panel (option)

Operating System
- Windows 7/2008/Vista/XP/2003
- Linux
- FreeBSD
- VMware
- Solaris 10/11 x86/x86_64
- Mac OS 10.4.x/10.5.x/10.6.x
(For the latest supported OS listing, visit http://www.areca.com.tw)

6Gb/s SAS RAID controllers

Model name:           ARC-1880ix-12 / ARC-1880ix-16 / ARC-1880ix-24
I/O Processor:        RAID-on-Chip 800MHz
Form Factor (H x L):  Full Height: 98.4 x 250 mm
Host Bus Type:        PCIe 2.0 x8 Lanes
Drive Connector:      3xSFF-8087 + 1xSFF-8088 / 4xSFF-8087 + 1xSFF-8088 / 6xSFF-8087 + 1xSFF-8088

6Gb/s SAS RAID controllers

Model name:           ARC-1880i / ARC-1880LP / ARC-1880x
I/O Processor:        RAID-on-Chip 800MHz
Form Factor (H x L):  Low Profile: 64.4 x 169.5 mm
Host Bus Type:        PCIe 2.0 x8 Lanes
Drive Connector:      2xSFF-8087 / 1xSFF-8087 + 1xSFF-8088 / 2xSFF-8088
Drive Support:        Up to 128 6Gb/s and 3Gb/s SAS/SATA HDDs
RAID Level:           0, 1, 1E, 3, 5, 6, 10, 30, 50, 60, Single Disk, JBOD
On-Board Cache:       512MB on-board DDR2-800 SDRAM
Management Port:      In-Band: PCIe; Out-of-Band: BIOS, LCD, LAN Port
Enclosure Ready:      Individual Activity/Faulty Header, SGPIO, SMP, SES-2 (for the external ports)
Note:
A low-profile bracket is included in the low-profile board shipping package.
2. Hardware Installation
This section describes the procedures for installing the 6Gb/s SAS RAID controllers.
Package Contents
If your package is missing any of the items listed below, contact your local dealer before you install. (Disk drives and disk mounting brackets are not included.)
1 x 6Gb/s SAS RAID controller in an ESD-protective bag
1 x Installation CD containing the driver, related software, an electronic version of this manual and other related manuals
1 x User manual
1 x Low-profile bracket
Connector     Type
3. (CN1)      SFF-8088
4. (J10)      RJ45
5. (J7)       10-pin header
6. (J9)       24-pin header
7. (J11)      24-pin header
8. (J1)       4-pin header
9. (J2)       8-pin header
10. (SCN1)    SFF-8087
11. (SCN2)    SFF-8087
12. (SCN3)    SFF-8087
13. (SCN4)    SFF-8087
14. (SCN5)    SFF-8087
15. (SCN6)    SFF-8087
Note:
*1: You can download the ARC1880ix_1882ix Expander-CLI.pdf manual from http://www.areca.com.tw/support/main.htm.
Table 2-4, ARC-1880x connectors

The following describes the ARC-1880 series link/activity LED behavior: when the link LED is illuminated, the link is connected; when the activity LED is illuminated, the adapter is active.
Front Side
Back Side
Connector (Front Side)   Type                                    Description
1. (J5)                  Ethernet port                           RJ45
2. (SCN4)                SAS 5-8 Ports (External)                SFF-8088
3. (J2)                  Battery Backup Module Connector         12-pin box header
4. (J1)                  Manufacture Purpose Port                10-pin header
5. (J3)                  Global Fault/Activity LED               4-pin header
6. (JP1)                 Individual Fault LED Header             8-pin header
7. (JP2)                 Individual Activity LED Header          8-pin header
8. (JP3)                 Individual Fault/Activity LED Header    8-pin header
9. (J4)                  I2C/LCD Connector                       8-pin header
10. (SCN2)               SAS 5-8 Ports (Internal)                SFF-8087
11. (SCN1)               SAS 1-4 Ports (Internal)                SFF-8087
12. (SCN3)               SAS 9-12 Ports (Internal)               SFF-8087
Tools Required
An ESD grounding strap or mat is required. Also required are standard hand tools to open your system's case.
System Requirement
The 6Gb/s SAS RAID controller can be installed in a universal PCIe slot. The ARC-1880 series 6Gb/s SAS RAID controller requires a motherboard that:
- Complies with the PCIe 2.0 x8 lane specification. The controller can work with PCIe 2.0 x1, x4, x8 and x16 signals on motherboards with x8 or x16 slots.
- Is backward compatible with PCIe 1.0.
Installation Tools
The following items may be needed to assist with installing the 6Gb/s SAS RAID controller into an available PCIe expansion slot:
- Small screwdriver
- Host system hardware manuals and manuals for the disk or enclosure being installed
Warning:
High voltages may be found inside computer equipment. Before installing any of the hardware in this package or removing the protective covers of any computer equipment, turn off power switches and disconnect power cords. Do not reconnect the power cords until you have replaced the covers.
Before opening the system cover, turn off power switches and unplug the power cords. Do not reconnect the power cords until you have replaced the covers.
Electrostatic Discharge
Static electricity can cause serious damage to the electronic components on this 6Gb/s SAS RAID controller. To avoid damage caused by electrostatic discharge, observe the following precautions:
- Do not remove the 6Gb/s SAS RAID controller from its anti-static packaging until you are ready to install it into a computer case.
- Handle the 6Gb/s SAS RAID controller by its edges or by the metal mounting brackets at each end.
- Before you handle the 6Gb/s SAS RAID controller in any way, touch a grounded, anti-static surface, such as an unpainted portion of the system chassis, for a few seconds to discharge any built-up static electricity.
2.3 Installation
Follow the instructions below to install a PCIe 2.0 6Gb/s SAS RAID controller.

Step 1. Unpack
Unpack and remove the PCIe 2.0 6Gb/s SAS RAID controller from the package. Inspect it carefully; if anything is missing or damaged, contact your local dealer.

Step 2. Power PC/Server Off
Turn off the computer and remove the AC power cord. Remove the system's cover. For instructions, please see the computer system documentation.

Step 3. Check Memory Module
Be sure the cache memory module is present and seated firmly in the DIMM socket (DDR2-800) for ARC-1880ix-12/16/24 models. The physical memory configuration for the ARC-1880ix series is one 240-pin DDR2-800 ECC registered SDRAM DIMM module.
Step 4. Install the PCIe 6Gb/s SAS RAID Cards To install the 6Gb/s SAS RAID controller, remove the mounting screw and existing bracket from the rear panel behind the selected PCIe 2.0 slot. Align the gold-fingered edge on the card with the selected PCIe 2.0 slot. Press down gently but firmly to ensure that the card is properly seated in the slot, as shown in Figure 2-5. Then, screw the bracket into the computer chassis. ARC-1880 series controllers require a PCIe 2.0 x8 slot.
Figure 2-5, Insert 6Gb/s SAS RAID controller into a PCIe slot

Step 5. Mount the Drives
You can connect the SAS/SATA drives to the controller through direct cable and backplane solutions. In the direct connection, SAS/SATA drives are directly connected to the 6Gb/s SAS RAID controller PHY port with SAS/SATA cables. The 6Gb/s SAS RAID controller can support up to 28 PHY ports. Remove the front bezel from the computer chassis and install the cages or SAS/SATA drives in the computer chassis. Load the drives into the drive trays if cages are installed. Be sure that the power is connected to either the cage backplane or the individual drives.
In the backplane solution, SAS/SATA drives are directly connected to the 6Gb/s SAS system backplane or through an expander board. The number of SAS/SATA drives is limited to the number of slots available on the backplane. Some backplanes support daisy-chain expansion to the next backplane. The 6Gb/s SAS RAID controller can support a daisy-chain of up to 8 enclosures; the maximum number of drives is 128 devices through 8 enclosures. The following figure shows how to connect the external Min SAS cable from a 6Gb/s SAS RAID controller that has external connectors to external drive boxes or drive enclosures.

Figure 2-6, External connector to a drive box or drive enclosure

The following table lists the maximum configuration supported by the 6Gb/s SAS RAID controller:

                    Max No.
Disks/Enclosure     32
Expander            8
Disks/Controller    128
Volume              128
Note:
1. A maximum of 32 disk drives can be included in a single RAID set.
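The attachment limits quoted above lend themselves to a quick pre-installation sanity check. Below is a minimal Python sketch (our own illustration, not an Areca tool) that applies the limits from the table and the note; all names and the example figures are ours.

    # Illustrative check of the documented attachment limits (not an Areca tool).
    MAX_DISKS_PER_ENCLOSURE = 32
    MAX_ENCLOSURES = 8              # daisy-chained enclosures/expanders per controller
    MAX_DISKS_PER_CONTROLLER = 128
    MAX_DISKS_PER_RAID_SET = 32     # from the note above

    def check_topology(disks_per_enclosure, enclosures):
        """Return a list of problems with a proposed drive topology."""
        problems = []
        if enclosures > MAX_ENCLOSURES:
            problems.append(f"{enclosures} enclosures exceed the {MAX_ENCLOSURES}-enclosure daisy-chain limit")
        if disks_per_enclosure > MAX_DISKS_PER_ENCLOSURE:
            problems.append(f"{disks_per_enclosure} disks per enclosure exceed the limit of {MAX_DISKS_PER_ENCLOSURE}")
        total = disks_per_enclosure * enclosures
        if total > MAX_DISKS_PER_CONTROLLER:
            problems.append(f"{total} total disks exceed the {MAX_DISKS_PER_CONTROLLER}-disk controller limit")
        return problems

    print(check_topology(disks_per_enclosure=24, enclosures=5))   # [] -- within limits
    print(check_topology(disks_per_enclosure=24, enclosures=8))   # 192 disks exceed the controller limit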
Step 6. Install SAS Cable
This section describes how to connect SAS cables to the controller.
Step 7. Install the LED Cable (option)
The preferred I/O connector for server backplanes is the internal SFF-8087 connector. This connector has eight signal pins to support four SAS/SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient LED management and for sensing drive locate status. See SFF-8485 for the specification of the SGPIO bus. For backplanes without SGPIO support, please refer to Section 2.4 LED cables for fault/activity LED cable installation.

LED Management: The backplane may contain LEDs to indicate drive status. Light from the LEDs could be transmitted to the outside of the server by using light pipes mounted on the SAS drive tray. A small microcontroller on the backplane, connected via the SGPIO bus to a 6Gb/s SAS RAID controller, could control the LEDs. Activity: blinking 5 times/second; Fault: solid illuminated.

Drive Locate Circuitry: The location of a drive may be detected by sensing the voltage level of one of the pre-charge pins before and after a drive is installed.

The following signals define the SGPIO assignments for the Min SAS 4i internal connector (SFF-8087) on the 6Gb/s SAS RAID controller.
PIN          Description
SideBand0    SClock (Clock signal)
SideBand1    SLoad (Last clock of a bit stream)
SideBand2    Ground
SideBand3    Ground
SideBand4    SDataOut (Serial data output bit stream)
SideBand5    SDataIn (Serial data input bit stream)
SideBand6    Reserved
SideBand7    Reserved
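For scripting or documentation purposes, the sideband assignments above can be captured as a simple lookup table. The short Python sketch below merely restates the table; it is not part of any Areca software.

    # SFF-8087 sideband (SGPIO) pin assignments, restating the table above.
    SGPIO_SIDEBAND = {
        "SideBand0": "SClock (clock signal)",
        "SideBand1": "SLoad (last clock of a bit stream)",
        "SideBand2": "Ground",
        "SideBand3": "Ground",
        "SideBand4": "SDataOut (serial data output bit stream)",
        "SideBand5": "SDataIn (serial data input bit stream)",
        "SideBand6": "Reserved",
        "SideBand7": "Reserved",
    }

    for pin, signal in SGPIO_SIDEBAND.items():
        print(f"{pin}: {signal}")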
The SFF-8087 to 4x SATA cable with sideband follows the SFF-8448 specification. The SFF-8448 sideband signal cable is reserved for backplanes that have a sideband header. The following signals define the sideband connector, which can work with the Areca sideband cable on its SFF-8087 to 4x SATA cable.
The sideband header is located on the backplane. For SGPIO to work properly, please connect the Areca 8-pin sideband cable to the sideband header as shown above. See the table for pin definitions.
Note:
For the latest release versions of drivers, please download them from http://www.areca.com.tw/support/main.htm
Step 8. Adding a Battery Backup Module (optional)
Please refer to Appendix B for installing the BBM in your 6Gb/s SAS RAID controller.

Step 9. Re-check Fault LED Cable Connections (optional)
Be sure that the proper failed-drive channel information is displayed by the fault LEDs. An improper connection will tell the user to hot-swap the wrong drive. This can result in removing the wrong disk (one that is functioning properly) from the controller, which can result in failure and loss of system data.

Step 10. Power up the System
Thoroughly check the installation, reinstall the computer cover, and reconnect the power cord. Turn on the power switch at the rear of the computer (if equipped) and then press the power button at the front of the host computer.

Step 11. Install the Controller Driver
For a new system:
Driver installation usually takes place as part of operating system
installation. Please refer to Chapter 4, Driver Installation, for the detailed installation procedure.

In an existing system:
Install the controller driver into the existing operating system. For the detailed installation procedure, please refer to Chapter 4, Driver Installation.

Step 12. Install ArcHttp Proxy Server
The 6Gb/s SAS RAID controller firmware has an embedded web-browser-based McRAID storage manager. The ArcHttp proxy server launches the web-browser McRAID storage manager, which provides creation, management and monitoring of the 6Gb/s SAS RAID controller. Please refer to Chapter 5 for details of the ArcHttp proxy server installation. For the SNMP agent function, please refer to Appendix C.

Step 13. Configure Volume Set
The controller configures RAID functionality through the McBIOS RAID manager. Please refer to Chapter 3, McBIOS RAID Manager, for details. The RAID controller can also be configured through the McRAID storage manager with the ArcHttp proxy server installed, through the LCD module (refer to the LCD manual) or through the on-board LAN port. For these options, please refer to Chapter 6, Web Browser-Based Configuration.

Step 14. Determining the Boot Sequences
For PC systems:
The 6Gb/s SAS RAID controller is a bootable controller. If your system already contains a bootable device with an installed operating system, you can set up your system to boot a second operating system from the new controller. To add a second bootable controller, you may need to enter the motherboard BIOS setup and change the device boot sequence so that the 6Gb/s SAS RAID controller heads the list. If the system BIOS setup does not allow this change, your system may not be configurable to allow the 6Gb/s SAS RAID controller to act as a second boot device.
For Apple Mac Pro systems:
Mac OS X 10.X currently cannot boot directly from a 6Gb/s SAS controller volume on Power Mac G5 machines (Open Firmware is not supported); on those machines the controller can only be used as secondary storage. All Intel-based Mac Pro machines use EFI (not Open Firmware, which was used for PPC Macs) to boot the system. Areca supports EFI BIOS on its PCIe 2.0 6Gb/s SAS RAID controllers, so you can add a volume set to the Mac Pro bootable device list. Follow these procedures to add a PCIe 2.0 6Gb/s SAS RAID controller volume to the Mac Pro bootable device list.

(1). Upgrade the EFI BIOS from the shipping <CD-ROM>\Firmware\Mac\ directory or from www.areca.com.tw if the controller shipped by default with a legacy BIOS for the PC. Please follow Appendix A, Upgrading Flash ROM Update Process, to update the legacy BIOS to the EFI BIOS so the Mac Pro can boot from the 6Gb/s SAS RAID controller volume.

(2). Clone (with a utility such as Carbon Copy Cloner) the Mac OS X 10.4.x, 10.5.x or 10.6.x system disk on the Mac Pro to the PCIe 2.0 6Gb/s SAS RAID controller volume set. Carbon Copy Cloner is an archival type of backup software; you can take your whole Mac OS X system and make a carbon copy or clone to the Areca volume set as with any other hard drive.

(3). Power up the Mac Pro machine; it takes about 30 seconds for the controller firmware to become ready. During this period the boot-up screen stays blank before the Areca volume appears in the bootable device list.
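As a quick follow-up to steps 12 and 13, you can confirm that the controller's out-of-band web manager answers on its LAN port. The Python sketch below is only an illustration: the IP address is a placeholder for your controller's address, and the port assumes the default HTTP port, which is configurable in the firmware under Ethernet Configuration (HTTP Port Number).

    # Check that the firmware-embedded McRAID web manager answers on the LAN port.
    import urllib.request
    import urllib.error

    CONTROLLER_IP = "192.168.0.100"   # placeholder -- use your controller's LAN IP
    HTTP_PORT = 80                    # assumed default; adjustable in the firmware

    def mcraid_reachable(ip=CONTROLLER_IP, port=HTTP_PORT, timeout=5):
        url = f"http://{ip}:{port}/"
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except urllib.error.HTTPError:
            return True               # the server answered (e.g. asked for a login)
        except (urllib.error.URLError, OSError):
            return False              # no answer -- re-check cabling and IP settings

    print("McRAID web manager reachable:", mcraid_reachable())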
The 6Gb/s SAS RAID controllers have Min SAS 4i internal connectors, each of which can support up to four SAS/SATA drives. These controllers can be installed in a server RAID enclosure with a standard SATA-connector backplane. The following diagram shows a picture of the Min SAS 4i to 4xSATA cables. A backplane that supports an SGPIO header can leverage the SGPIO function on the 6Gb/s SAS RAID controller through the sideband cable. The SFF-8448 sideband signal cable is reserved for backplanes that have a sideband header.
2.4.3 Internal Min SAS 4i (SFF-8087) to Internal Min SAS 4i (SFF-8087) cable
The 6Gb/s SAS RAID controllers have 1-6 Min SAS 4i internal SFF-8087 connectors, each of which can support up to four SAS/SATA signals. These controllers can be installed in a server RAID enclosure with a Min SAS 4i internal-connector backplane. This Min SAS 4i cable has eight signal pins to support four SAS/SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient LED management and for sensing drive locate status.
2.4.4 External Min SAS 4i Drive Boxes and Drive Expanders
The Min SAS 4x external cables are used for connections between the 6Gb/s SAS controller's external connectors and connectors on external drive boxes or drive expanders (JBOD). The 6Gb/s SAS controller has Min SAS 4x (SFF-8088) external connectors, each of which can support up to four SAS/SATA signals.
Note:
A cable for the global indicator comes with your computer system. Cables for the individual drive LEDs may come with a drive cage, or you may need to purchase them.

A: Individual Activity/Fault LED and Global Indicator Connector
Most backplanes support the HDD activity LED directly from the HDD. The 6Gb/s SAS RAID controller also provides a fault signal for the fault LED. Connect the cables for the drive fault LEDs between the backplane of the cage and the respective connector on the 6Gb/s SAS RAID controller. The following table shows the fault LED signal behavior.
LED: Fault LED
Normal Status: When the fault LED is solid illuminated, there is no disk present. When the fault LED is off, the disk is present and its status is normal.
Problem Indication: When the fault LED is slow blinking (2 times/sec), that disk drive has failed and should be hot-swapped immediately. When the activity LED is illuminated and the fault LED is fast blinking (10 times/sec), there is rebuilding activity on that disk drive.
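The LED behavior in the table can also be summarized as a small decision helper. The Python sketch below is just a reading aid for the table above, not controller firmware logic; the state names are ours.

    # Interpret the fault/activity LED patterns described in the table above.
    def drive_state(fault_led, activity_led=False):
        """fault_led: 'off', 'solid', 'slow_blink' (2/sec) or 'fast_blink' (10/sec)."""
        if fault_led == "solid":
            return "no disk present in this slot"
        if fault_led == "off":
            return "disk present, status normal"
        if fault_led == "slow_blink":
            return "disk drive has failed - hot-swap it immediately"
        if fault_led == "fast_blink" and activity_led:
            return "rebuilding activity on this disk drive"
        return "pattern not described in the table"

    print(drive_state("slow_blink"))
    print(drive_state("fast_blink", activity_led=True))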
If the system will use only a single global indicator, attach the LED to the two pins of the global activity/cache write-pending connector. The global fault pin pair connector carries the overall fault signal; it lights up on any disk drive failure. The following diagrams show all LEDs, connectors and pin locations.
Figure 2-13, ARC-1880ix-12/16/24 individual LED for each channel drive and global indicator connector for computer case.
Figure 2-14, ARC-1880i individual LED for each channel drive and global indicator connector for computer case.
Figure 2-15, ARC-1880LP individual LED for each channel drive and global indicator connector for computer case.
Figure 2-16, ARC-1880x individual LED for each channel drive and global indicator connector for computer case.
Figure 2-17, ARC-1880ixl-8/12 individual LED for each channel drive and global indicator connector for computer case.
B: Areca Serial Bus Connector You can also connect the Areca interface to a proprietary SAS/ SATA backplane enclosure. This can reduce the number of activity LED and/or fault LED cables. The I2C interface can also cascade to another SAS/SATA backplane enclosure for the additional channel status display.
Figure 2-18, Activity/Fault LED serial bus connector connected between the 6Gb/s SAS RAID controller and a 4x SATA HDD backplane. The following picture and table give the serial bus signal name descriptions for the LCD and fault/activity LED.
PIN   Description                    PIN   Description
1     Power (+5V)                    2
3     LCD Module Interrupt           4
5     LCD Module Serial Data         6
7     Fault/Activity Serial Data     8
The Areca serial bus also supports SES (SCSI Enclosure Services) over I2C through an internal I2C backplane cable. The backplane cable connects the I2C signal from the Areca controller to the backplane using an IPMI-style 3-pin I2C connector. This means you can link the I2C cable to the backplane and let the backplane LEDs indicate hard disk failure status.
2.5 Hot-plug Drive Replacement
The RAID controller supports the ability to perform a hot-swap drive replacement without powering down the system. A disk can be disconnected, removed, or replaced with a different disk without taking the system off-line. RAID rebuilding is processed automatically in the background. When a disk is hot-swapped, the RAID controller may no longer be fault tolerant. Fault tolerance will be lost until the hot-swapped drive is subsequently replaced and the rebuild operation is completed.
Note:
The capacity of the replacement drives must be at least as large as the capacity of the other drives in the RAID set. Drives of insufficient capacity will be failed immediately by the RAID controller without starting the automatic data rebuild.
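The capacity rule in the note can be expressed in a couple of lines. The sketch below is our own illustration (sizes in GB, conservatively comparing against the largest existing member), not an Areca API.

    # A replacement drive is failed immediately unless it is at least as large
    # as the drives already in the RAID set (conservative reading: the largest).
    def replacement_accepted(existing_sizes_gb, replacement_size_gb):
        return replacement_size_gb >= max(existing_sizes_gb)

    raid_set = [1000, 1000, 1000]                  # three 1 TB members
    print(replacement_accepted(raid_set, 1000))    # True  -> automatic rebuild can start
    print(replacement_accepted(raid_set, 750))     # False -> drive is failed, no rebuild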
The following table lists the software components that configure and monitor the 6Gb/s SAS RAID controllers.
Configuration Utility                                            Operating System Supported
McBIOS RAID Manager                                              OS-Independent
McRAID Storage Manager (via ArcHttp proxy server)                Windows 7/2008/Vista/XP/2003, Linux, FreeBSD, Solaris and Mac
SAP Monitor (Single Admin Portal to scan for multiple
RAID units in the network, via ArcHttp proxy server)             Windows 7/2008/Vista/XP/2003
SNMP Manager Console Integration
McRAID Storage Manager
Before launching the firmware-embedded web server (the McRAID storage manager) through the PCIe bus, you first need to install the ArcHttp proxy server on your server system. If you need additional information about installation and start-up of this function, see the McRAID Storage Manager section in Chapter 6.

SNMP Manager Console Integration
There are two ways to transport SNMP data on the 6Gb/s SAS RAID controller: the In-Band PCIe host bus interface or the Out-of-Band built-in LAN interface. Enter the SNMP Trap IP Address option in the firmware-embedded SNMP configuration function to select agent-side SNMP communication over the Out-of-Band built-in LAN interface. To use the In-Band PCIe host bus interface, leave the SNMP Trap IP Address option blank.
Out-of-Band: Using the LAN Port Interface
The out-of-band interface refers to transporting SNMP data from 6Gb/s SAS controllers to a remote station connected to the controller through a network cable. Before launching the SNMP manager on the client, you first need to enable the firmware-embedded SNMP agent function; no additional agent software is required on your server system. If you need additional information about installation and start-up of this function, see Section 6.8.4, SNMP Configuration.

In-Band: Using the PCIe Host Bus Interface
The in-band interface refers to management of the SNMP data of 6Gb/s SAS controllers over the PCIe host bus. The in-band interface is simpler than the out-of-band interface because it requires less hardware in its configuration. Since the SAS controller is already installed in the host system, no extra connection is necessary; just load the necessary in-band Areca SNMP extension agent for the controllers. Before launching the SNMP agent on the server, you first need to enable the firmware-embedded SNMP community configuration and install the Areca SNMP extension agent on your server system. If you need additional information about installation and start-up of this function, see the SNMP Operation & Installation section in Appendix C.

Single Admin Portal (SAP) Monitor
This utility can scan for multiple RAID units on the network and monitor the controller set status. For additional information, see the utility manual (SAP) on the packaged CD or download it from http://www.areca.com.tw.
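As a hedged example of the out-of-band path described above, the controller's firmware-embedded SNMP agent can be polled from a management station with the standard net-snmp snmpget tool. The IP address and community string below are placeholders, and only the generic MIB-2 sysDescr object is queried; Areca-specific OIDs come from the MIB files on the shipping CD (see Appendix C).

    # Poll the controller's out-of-band SNMP agent via net-snmp's snmpget.
    import subprocess

    CONTROLLER_IP = "192.168.0.100"   # placeholder -- the controller's LAN IP
    COMMUNITY = "public"              # placeholder -- community set in the firmware

    def snmp_sysdescr(ip=CONTROLLER_IP, community=COMMUNITY):
        cmd = ["snmpget", "-v", "2c", "-c", community, ip, "1.3.6.1.2.1.1.1.0"]
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
        return result.stdout.strip() if result.returncode == 0 else None

    print(snmp_sysdescr())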
3. McBIOS RAID Manager
The system mainboard BIOS automatically configures the following 6Gb/s SAS RAID controller parameters at power-up:
- I/O Port Address
- Interrupt Channel (IRQ)
- Controller ROM Base Address
Use the McBIOS RAID manager to further configure the 6Gb/s SAS RAID controller to suit your server hardware and operating system.
The McBIOS RAID manager message remains on your screen for about nine seconds, giving you time to start the configuration menu by pressing Tab or F6. If you do not wish to enter configuration menu, press ESC to skip configuration immediately. When activated, the McBIOS RAID manager window appears showing a selection dialog box listing the 6Gb/s SAS RAID controllers that are installed in the system. The legend at the bottom of the screen shows you what keys are enabled for the windows.
[Screen: Areca Technology Corporation RAID Setup <V1.40, 2006/08/8> controller selection dialog. Legend: ArrowKey or A~Z: Move Cursor, Enter: Select, Press F10 (Tab) to Reboot]
Use the Up and Down arrow keys to select the controller you want to configure. While the desired controller is highlighted, press the Enter key to enter the main menu of the McBIOS RAID manager.
[Screen: McBIOS RAID manager main menu]
I/O Port Addr: 28000000h, F2(Tab): Select Controller, F10: Reboot System
Areca Technology Corporation RAID Controller
Main Menu:
  Quick Volume/Raid Setup
  Raid Set Function
  Volume Set Function
  Physical Drives
  Raid System Function
  Hdd Power Management
  Ethernet Configuration
  View System Events
  Clear Event Buffer
  Hardware Monitor
  System Information
Note:
The manufacture default password is set to 0000; this password can be modified by selecting Change Password in the Raid System Function section.
[Screen: Verify Password dialog. Legend: ArrowKey or A~Z: Move Cursor, Enter: Select, ESC: Escape, L: Line Draw, X: Redraw]
Expand RAID sets, Add physical drives, Define volume sets, Modify volume sets, Modify RAID level/stripe size, Define pass-through disk drives, Modify system functions and Designate drives as hot spares.
3.5 Using Quick Volume /Raid Setup Configuration
Quick Volume / Raid Setup configuration collects all available drives and includes them in a RAID set. The RAID set you created is associated with exactly one volume set. You will only be able to modify the default RAID level, stripe size and capacity of the new volume set. Designating drives as hot spares is also possible in the Raid Level selection option. The volume set default settings will be:
Parameter                        Setting
Volume Name                      ARC-1880-VOL#00
SCSI Channel/SCSI ID/SCSI LUN    0/0/0
Cache Mode                       Write-Back
Tag Queuing                      Yes
The default setting values can be changed after configuration is completed. Follow the steps below to create arrays using the Quick Volume/Raid Setup method:
Step 1: Choose Quick Volume/Raid Setup from the main menu.

Step 2: The available RAID levels with hot spare for the current volume set drive are displayed. It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set. The number of physical drives in a specific array determines which RAID levels can be implemented in the array (a programmatic summary of these drive-count rules follows this step table).
  RAID 0 requires 1 or more physical drives.
  RAID 1 requires at least 2 physical drives.
  RAID 10(1E) requires at least 3 physical drives.
  RAID 3 requires at least 3 physical drives.
  RAID 5 requires at least 3 physical drives.
  RAID 3 + Spare requires at least 4 physical drives.
  RAID 5 + Spare requires at least 4 physical drives.
  RAID 6 requires at least 4 physical drives.
  RAID 6 + Spare requires at least 5 physical drives.
Highlight the desired RAID level for the volume set and press the Enter key to confirm.
Step 3: The capacity for the current volume set is entered after highlighting the desired RAID level and pressing the Enter key. The capacity for the current volume set is displayed. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to confirm.

Step 4: The available stripe sizes for the current volume set are then displayed. Use the UP and DOWN arrow keys to select the current volume set stripe size and press the Enter key to confirm. This parameter specifies the size of the stripes written to each disk in a RAID 0, 1, 10(1E), 5 or 6 volume set. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB or 1M. A larger stripe size provides better read performance, especially when the computer performs mostly sequential reads. However, if the computer performs random read requests more often, choose a smaller stripe size.

Step 5: When you are finished defining the volume set, press the Yes key to confirm the Quick Volume And Raid Set Setup function.

Step 6: Press the Enter key to define Foreground (Fast Completion) initialization, or select Background (Instant Available) or No Init (To Rescue Volume). In Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot and without waiting for the initialization to complete. In Foreground Initialization, the initialization must be completed before the volume set is ready for system access. In No Init, there is no initialization on this volume.

Step 7: Initialize the volume set you have just configured.

Step 8: If you need to add an additional volume set, use the main menu Create Volume Set function.
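As noted in step 2 above, the drive-count rules and the smallest-drive capacity rule can be summarized programmatically. The Python sketch below is only an illustration: the minimum-drive figures are taken from the step tables in this section, while the usable-capacity estimates are standard RAID arithmetic rather than anything controller-specific.

    # Minimum drive counts from the step tables above, plus an approximate
    # usable-capacity estimate (every member is truncated to the smallest drive).
    MIN_DRIVES = {
        "RAID 0": 1, "RAID 1": 2, "RAID 10(1E)": 3, "RAID 3": 3, "RAID 5": 3,
        "RAID 3 + Spare": 4, "RAID 5 + Spare": 4, "RAID 6": 4, "RAID 6 + Spare": 5,
    }

    def usable_capacity_gb(level, drive_sizes_gb):
        n = len(drive_sizes_gb)
        if n < MIN_DRIVES[level]:
            raise ValueError(f"{level} needs at least {MIN_DRIVES[level]} drives")
        c = min(drive_sizes_gb)                 # smallest drive sets the member size
        data = n - (1 if "Spare" in level else 0)
        if level.startswith("RAID 0"):
            return data * c
        if level.startswith("RAID 10"):
            return data * c / 2                 # mirrored stripes (1E approximated)
        if level.startswith("RAID 1"):
            return c
        if level.startswith("RAID 3") or level.startswith("RAID 5"):
            return (data - 1) * c               # one drive's worth of parity
        if level.startswith("RAID 6"):
            return (data - 2) * c               # two drives' worth of parity
        raise ValueError(f"unknown level: {level}")

    print(usable_capacity_gb("RAID 5", [500, 500, 400, 500]))   # 1200: members truncated to 400 GB
    print(usable_capacity_gb("RAID 6 + Spare", [1000] * 5))     # 2000: 4 data members, 2 for parity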
Step 1: To set up a hot spare (optional), choose Raid Set Function from the main menu. Select Create Hot Spare and press the Enter key to define the hot spare.

Step 2: Choose Raid Set Function from the main menu.

Step 3: Select Create Raid Set and press the Enter key.

Step 4: The Select a Drive For Raid Set window is displayed, showing the SAS/SATA drives connected to the 6Gb/s SAS RAID controller. Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set. The number of physical drives in a specific array determines which RAID levels can be implemented in the array.
  RAID 0 requires 1 or more physical drives.
  RAID 1 requires at least 2 physical drives.
  RAID 10(1E) requires at least 3 physical drives.
  RAID 3 requires at least 3 physical drives.
  RAID 5 requires at least 3 physical drives.
  RAID 6 requires at least 4 physical drives.
  RAID 30 requires at least 6 physical drives.
  RAID 50 requires at least 6 physical drives.
  RAID 60 requires at least 8 physical drives.

Step 5: After adding the desired physical drives to the current RAID set, press Enter to confirm the Create Raid Set function.

Step 6: An Edit The Raid Set Name dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for this new RAID set. The default RAID set name will always appear as Raid Set. #. Press the Enter key to finish the name editing.

Step 7: Press the Enter key when you are finished creating the current RAID set. To continue defining another RAID set, repeat step 3. To begin volume set configuration, go to step 8.

Step 8: Choose the Volume Set Function from the main menu. Select Create Volume Set and press the Enter key.

Step 9: Choose a RAID set from the Create Volume From Raid Set window. Press the Yes key to confirm the selection.
Step 10: Choose Foreground (Fast Completion), Background (Instant Available) or No Init (To Rescue Volume) and press the Enter key to define the initialization mode. In Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot and without waiting for the initialization to complete. In Foreground Initialization, the initialization must be completed before the volume set is ready for system access. In No Init, there is no initialization on this volume.

Step 11: If space remains in the RAID set, the next volume set can be configured. Repeat steps 8 to 10 to configure another volume set.
[Screen: Verify Password dialog. Legend: ArrowKey or A~Z: Move Cursor, Enter: Select, ESC: Escape, L: Line Draw, X: Redraw]
Note:
The manufacture default password is set to 0000; this password can be modified by selecting Change Password in the Raid System Function section.
Option                    Description
Quick Volume/Raid Setup   Create a default configuration based on the number of physical disks installed
Raid Set Function         Create a customized RAID set
Volume Set Function       Create a customized volume set
Physical Drives           View individual disk information
Raid System Function      Setup the RAID system configuration
Hdd Power Management      Manage HDD power based on usage patterns
Ethernet Configuration    LAN port setting
View System Events        Record all system events in the buffer
Clear Event Buffer        Clear all information in the event buffer
Hardware Monitor          Show the hardware system environment status
System Information        View the controller system information
This password option allows the user to set or clear the RAID controller's password protection feature. Once the password has been set, the user can only monitor and configure the RAID controller by providing the correct password. The password is used to protect the internal RAID controller from unauthorized entry. The controller will prompt for the password only when entering the main menu from the initial screen. The RAID controller will automatically return to the initial screen when it does not receive any command for five minutes.
4). If you need to add an additional volume set, use the main menu Create Volume Set function.

The total number of physical drives in a specific RAID set determines the RAID levels that can be implemented within the RAID set. Select Quick Volume/Raid Setup from the main menu; all possible RAID levels will be displayed on the screen.
[Screen: Main Menu > Quick Volume/Raid Setup. Total 5 Drives: Raid 0, Raid 1 + 0, Raid 1 + 0 + Spare, Raid 3, Raid 5, Raid 3 + Spare, Raid 5 + Spare, Raid 6, Raid 6 + Spare]
If the volume capacity will exceed 2TB, the controller will show the Greater Two TB Volume Support sub-menu.
[Screen: Main Menu > Quick Volume/Raid Setup. Greater Two TB Volume Support: No, Use 64bit LBA, Use 4K Block]
No: The volume size is kept within the 2TB limitation.

Use 64bit LBA: This option uses a 16-byte CDB instead of a 10-byte CDB. The maximum volume capacity is up to 512TB.
This option works on operating systems that support 16-byte CDB, such as:
  Windows 2003 with SP1 or later
  Linux kernel 2.6.x or later

Use 4K Block: This option changes the sector size from the default 512 bytes to 4K bytes. The maximum volume capacity is up to 16TB. This option works under the Windows platform only, and the volume cannot be converted to a Dynamic Disk, because the 4K sector size is not a standard format. For more details, please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip

A single volume set is created and consumes all or a portion of the disk capacity available in this RAID set. Define the capacity of the volume set in the Available Capacity popup. The default value for the volume set, which is 100% of the available capacity, is displayed as the selected capacity. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to accept this value. If the volume set uses only part of the RAID set capacity, you can use the Create Volume Set option in the main menu to define additional volume sets.
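Before moving on to the capacity screen below, note that the 2TB and 16TB figures above follow directly from 32-bit LBA addressing with a 10-byte CDB, while Use 64bit LBA removes that addressing ceiling. A minimal arithmetic sketch, using the binary (2^40 bytes) convention for TB, which is our own assumption about how the limits are counted:

    # Maximum volume size as a function of LBA width and sector size.
    TB = 2 ** 40

    def max_volume_tb(lba_bits, sector_bytes):
        return (2 ** lba_bits) * sector_bytes / TB

    print(max_volume_tb(32, 512))    # 2.0  -> "No": keep the 2TB limitation
    print(max_volume_tb(32, 4096))   # 16.0 -> "Use 4K Block"
    # "Use 64bit LBA" switches to a 16-byte CDB with a 64-bit LBA field, so the
    # 512TB ceiling quoted above is a controller limit rather than an LBA limit.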
[Screen: Main Menu > Quick Volume/Raid Setup. Available Capacity: 2400.0GB, Selected Capacity: 2400.0GB]
Stripe Size
This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 1E, 10, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB or 1M.
[Screen: Main Menu > Quick Volume/Raid Setup. Select Strip Size: 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1M]
A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you are sure that your computer performs random reads more often, select a smaller stripe size.
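One way to see this trade-off is to estimate how many member disks a single I/O request touches for a given stripe size. The alignment-agnostic estimate below is our own back-of-the-envelope sketch, not firmware behaviour; the request sizes and the 8-member array are example figures.

    # Approximate number of member disks one I/O request touches.
    import math

    def disks_touched(request_kb, stripe_kb, members):
        return min(members, math.ceil(request_kb / stripe_kb))

    for stripe_kb in (16, 64, 256):
        small = disks_touched(64, stripe_kb, members=8)      # 64 KB random read
        large = disks_touched(1024, stripe_kb, members=8)    # 1 MB sequential read
        print(f"stripe {stripe_kb:>3} KB: 64 KB read touches {small} disk(s), "
              f"1 MB read touches {large} disk(s)")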
[Screen: Quick Volume/Raid Setup — the Create Vol/Raid Set confirmation dialog (Yes/No).]
Press the Yes key in the Create Vol/Raid Set dialog box, and the RAID set and volume set will start to initialize.
Select Foreground (Faster Completion) or Background (Instant Available) for initialization, or No Init (To Rescue Volume) to recover a missing RAID set configuration.
[Screen: Quick Volume/Raid Setup — the Initialization Mode popup with the options Foreground (Faster Completion), Background (Instant Available), and No Init (To Rescue Volume).]
3.7.2.1 Create Raid Set
The following are the RAID set features for the 6Gb/s SAS RAID controller:
1. Up to 32 disk drives can be included in a single RAID set.
2. Up to 128 RAID sets can be created per controller, but RAID levels 30, 50 and 60 can only support eight sub-volumes (RAID sets).
To define a RAID set, follow the procedure below:
1. Select Raid Set Function from the main menu.
2. Select Create Raid Set from the Raid Set Function dialog box.
3. A Select IDE Drive For Raid Set window is displayed showing the SAS/SATA drives connected to the current controller. Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. Repeat this step to add as many disk drives as are available to a single RAID set. When finished selecting SAS/SATA drives for the RAID set, press the Esc key. A Create Raid Set Confirmation screen appears; select the Yes option to confirm it.
[Screen: Raid Set Function — the Select IDE Drives For Raid Set window listing the SAS/SATA drives in enclosure slots E#1Slot#1 through E#1Slot#8.]
4. An Edit The Raid Set Name dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for the RAID set. The default RAID set name always appears as Raid Set #.
5. Repeat steps 3 and 4 to define additional RAID sets.
Note:
To create a RAID 30/50/60 volume, you first need to create multiple RAID sets (up to 8 RAID sets) with the same number of disk members in each RAID set. The maximum number of disk drives per volume set is 32 for RAID 0/1/10/3/5/6 and 128 for RAID 30/50/60.
[Screen: Raid Set Function — the Edit The Raid Set Name dialog with the default name Raid Set # 000.]
3.7.2.3 Expand Raid Set
[Screen: Raid Set Function — Expand Raid Set: the Select Raid Set To Expansion list and the Are you Sure? (Yes/No) confirmation.]
Instead of deleting a RAID set and recreating it with additional disk drives, the Expand Raid Set function allows users to add disk drives to a RAID set that has already been created.
To expand a RAID set:
1. Select the Expand Raid Set option. If there is an available disk, the Select SAS/SATA Drives For Raid Set Expansion screen appears.
2. Select the target RAID set by clicking on the appropriate radio button.
3. Select the target disk by clicking on the appropriate check box.
4. Press the Yes key to start the expansion of the RAID set.
The new additional capacity can be utilized by one or more volume sets. The volume sets associated with this RAID set appear so that you have a chance to modify the RAID level or stripe size. Follow the instructions presented in Modify Volume Set to modify the volume sets; operating system-specific utilities may be required to expand operating system partitions.
Note:
1. Once the Expand Raid Set process has started, it cannot be stopped. The process must be completed.
2. If a disk drive fails during RAID set expansion and a hot spare is available, an auto rebuild operation will occur after the RAID set expansion completes.
3. RAID 30/50/60 does not support Expand Raid Set.
4. RAID set expansion is a critical process. We strongly recommend backing up your data before expanding; an unexpected accident may cause serious data corruption.
Migrating
[Screen: Raid Set Function — The Raid Set Information screen showing Raid State: Migrating, together with the member disks, total capacity, free capacity, minimum member disk size, and member disk channels.]
Migration occurs when a disk is added to a RAID set. The Migrating state is displayed in the RAID state area of The Raid Set Information screen while a disk is being added to a RAID set. The Migrating state is also displayed in the associated volume state area of the Volume Set Information for each volume set that belongs to this RAID set.
[Screen: Raid Set Function — Offline Raid Set: select the RAID set to take offline and confirm with Are you Sure? (Yes/No).]
3.7.2.6 Create Hot Spare
When you choose the Create Hot Spare option in the Raid Set Function, all unused physical devices connected to the current controller appear on the screen. Select the target disk by clicking on the appropriate check box. Press the Enter key to select a disk drive and press Yes in the Create Hot Spare dialog to designate it as a hot spare. Create Hot Spare gives you the ability to define a global or dedicated hot spare. Unlike a Global Hot Spare, which can be used with any RAID set, a Dedicated Hot Spare can only be used with a specific RAID set or enclosure. When a disk drive fails in a RAID set or enclosure for which a dedicated hot spare has been set, data from the failed disk drive is automatically rebuilt on the dedicated hot spare disk.
[Screen: Raid Set Function — Create Hot Spare: the Select Drives For HotSpare list and the Select Hot Spare Type popup (Global, Dedicated To RaidSet, Dedicated To Enclosure).]
[Screen: Raid Set Function — Delete Hot Spare: the Select The HotSpare Device To Be Deleted list and the Delete HotSpare? (Yes/No) confirmation.]
Note:
Please contact us to confirm whether you need to use the rescue function. Improper usage may cause configuration corruption.
You can manually fail a drive, which is useful for retiring a slow disk even when there is nothing physically wrong with it. A manually failed drive can be rebuilt by the hot spare and brought back on-line.
[Screen: McBIOS RAID manager main menu with the Volume Set Function item.]
The following are the volume set features for the 6Gb/s SAS RAID controller:
1. Volume sets of different RAID levels may coexist on the same RAID set, with up to 128 volume sets per controller.
2. Up to 128 volume sets can be created in a RAID set.
3. The maximum addressable size of a single volume set is not limited to 2TB, because the controller is capable of 64-bit LBA mode. However, the operating system itself may not be capable of addressing more than 2TB. See the ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.ZIP file on the Areca website for details.
To create a volume set, follow these steps:
1. Select the Volume Set Function from the main menu.
2. Choose Create Volume Set from the Volume Set Functions dialog box.
3. The Create Volume From Raid Set screen appears. This screen displays the existing arranged RAID sets. Select a RAID set number and press the Enter key. The Volume Creation dialog is displayed on the screen.
4. The new volume set attributes allow the user to select the Volume Name, Raid Level, Capacity, Strip Size, SCSI Channel/SCSI ID/SCSI LUN, Cache Mode, and Tagged Command Queuing.
[Screen: Volume Set Function — the Volume Creation dialog showing Volume Name, Raid Level, Capacity, Stripe Size, SCSI Channel, SCSI ID, SCSI LUN, Cache Mode, and Tag Queuing.]
5. After completing the modification of the volume set, press the Esc key to confirm it. An Initialization Mode screen appears. Select Foreground (Faster Completion) for faster initialization of the selected volume set, Background (Instant Available) for normal initialization of the selected volume set, or No Init (To Rescue Volume) for no initialization of the selected volume.
[Screen: Volume Set Function — the Initialization Mode popup with the options Foreground (Faster Completion), Background (Instant Available), and No Init (To Rescue Volume).]
6. Repeat steps 3 to 5 to create additional volume sets.
7. The initialization percentage of the volume set will be displayed at the bottom line.
Volume Name
The default volume name will always appear as ARC-1880-VOL #. You can rename the volume set, provided it does not exceed the 15-character limit.
[Screen: Volume Set Function — the Edit The Volume Name dialog with the default name ARC-1880-VOL# 000.]
Raid Level

Set the RAID level for the volume set. Highlight Raid Level and press the Enter key. The available RAID levels for the current volume set are displayed. Select a RAID level and press the Enter key to confirm.

[Screen: the Select Raid Level popup — available levels 0, 0+1, 3, 5, and 6.]
Capacity

The maximum available volume size is the default value for the first setting. Enter the appropriate volume size to fit your application. The capacity value can be increased or decreased by the UP and DOWN arrow keys. The capacity of each volume set must be less than or equal to the total capacity of the RAID set on which it resides.

[Screen: the Volume Creation dialog with the Available Capacity (2400.0GB) and Selected Capacity (2400.0GB) fields for the selected RAID set.]
If the volume capacity will exceed 2TB, the controller will show the "Greater Two TB Volume Support" sub-menu.
[Screen: the Greater Two TB Volume Support submenu with the options No, Use 64bit LBA, and Use 4K Block.]
No: It keeps the volume size with a maximum 2TB limitation.

Use 64bit LBA: This option uses a 16-byte CDB instead of a 10-byte CDB. The maximum volume capacity is up to 512TB. This option works on operating systems that support 16-byte CDB, such as Windows 2003 with SP1 or later and Linux kernel 2.6.x or later.

Use 4K Block: This option changes the sector size from the default 512 bytes to 4K bytes. The maximum volume capacity is up to 16TB. This option works under the Windows platform only, and the volume cannot be converted to a Dynamic Disk, because the 4K sector size is not a standard format. For more details, please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
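These capacity limits follow directly from the addressing arithmetic. The sketch below is illustrative only (it is not controller firmware); it simply reproduces the limits quoted above from the LBA width and sector size.

    # Illustrative arithmetic only: where the 2TB and 16TB limits described
    # above come from. A 10-byte CDB carries a 32-bit LBA; "Use 4K Block"
    # keeps the 32-bit LBA but enlarges the sector; "Use 64bit LBA" switches
    # to a 16-byte CDB whose 64-bit LBA removes the addressing limit (the
    # firmware quotes a 512TB per-volume maximum).
    LBA_32BIT = 2 ** 32               # addressable sectors with a 10-byte CDB

    limit_512b = LBA_32BIT * 512      # bytes addressable with 512-byte sectors
    limit_4k = LBA_32BIT * 4096       # bytes addressable with 4K sectors

    print(limit_512b // 2 ** 40)      # 2  -> the 2TB limitation with "No"
    print(limit_4k // 2 ** 40)        # 16 -> the 16TB limit with "Use 4K Block"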
Stripe Size
This parameter sets the size of the segment written to each disk in a RAID 0, 1, 1E, 10, 5, 6, 50 or 60 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, or 1 MB.
[Screen: Volume Set Function — the Volume Creation dialog with the Stripe Size field (64K) highlighted.]
SCSI Channel
The 6Gb/s SAS RAID controller function simulates an external SCSI RAID controller. The host bus represents the SCSI channel. Choose the SCSI Channel; a Select SCSI Channel dialog box appears. Select the channel number and press the Enter key to confirm it.
[Screen: Volume Set Function — the Volume Creation dialog with the SCSI Channel field (0) highlighted.]
SCSI ID
Each device attached to the 6Gb/s SAS RAID controller, as well as the 6Gb/s SAS RAID controller itself, must be assigned a unique SCSI ID number. A SCSI channel can connect up to 15 devices. It is necessary to assign a SCSI ID to each device from a list of available SCSI IDs.
[Screen: Volume Set Function — the Volume Creation dialog with the SCSI ID field (0) highlighted.]
SCSI LUN

Each SCSI ID can support up to 8 LUNs. Most 6Gb/s SAS controllers treat each LUN as if it were a SAS disk.

[Screen: Volume Set Function — the Volume Creation dialog with the SCSI LUN field (0) highlighted.]
Cache Mode
The user can set the cache mode to either Write Through or Write Back.
[Screen: Volume Set Function — the Volume Creation dialog with the Cache Mode field (Write Back) highlighted.]
Tag Queuing
This option, when enabled, can enhance overall system performance under multi-tasking operating systems. The Command Tag (Drive Channel) function controls the SAS command tag queuing support for each drive channel. This function should normally remain enabled. Disable this function only when using older drives that do not support command tag queuing.
[Screen: Volume Set Function — the Volume Creation dialog with the Tag Queuing field (Enabled) highlighted.]
3.7.3.2 Create Raid30/50/60 (Volume Set 30/50/60)
To create a RAID 30/50/60 volume set from a RAID set group, move the cursor bar to the main menu and click on the Create Raid30/50/60 link. The Select The Raid Set To Create Volume On It screen will show all RAID set numbers. Tick the RAID set numbers (with the same number of disks per RAID set) that you want to include and then click on it. The new volume set attribute options allow users to select the Volume Name, Capacity, Raid Level, Strip Size, SCSI ID/LUN, Cache Mode, and Tagged Command Queuing. The detailed descriptions of these parameters are given in section 3.7.3.1. Users can modify the default values in this screen; the modification procedures are described in section 3.7.3.4.
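For orientation, the usable capacity of such a layered volume can be estimated with simple arithmetic. The sketch below assumes equal-size member disks and a RAID 50 layout (RAID 0 striping across RAID 5 sub-RAID sets); it is only an illustration, not output from the controller.

    # Illustrative capacity arithmetic for RAID 50 (assuming equal-size members):
    # each RAID 5 sub-RAID set contributes (disks - 1) * disk size, and the RAID 0
    # stripe across the sub-sets simply adds those contributions together.
    def raid50_usable_gb(num_subsets, disks_per_subset, disk_gb):
        return num_subsets * (disks_per_subset - 1) * disk_gb

    # e.g. two 3-disk RAID 5 sub-RAID sets built from 400GB drives:
    print(raid50_usable_gb(2, 3, 400))   # 1600 (GB usable)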
[Screen: Volume Set Function — Create Raid30/50/60: the list of RAID sets with their free capacities, with the desired RAID sets ticked.]
Note:
RAID levels 30, 50 and 60 can support up to eight RAID sets (four pairs).
3.7.3.3 Delete Volume Set
To delete a volume set from a RAID set, move the cursor bar to the Volume Set Functions menu and select the Delete Volume Set item, then press the Enter key. The Volume Set Functions menu will show all Raid Set # items. Move the cursor bar to a RAID set number, then press the Enter key to show all volume sets within that RAID set. Move the cursor to the volume set number that is to be deleted and press the Enter key to delete it.
[Screen: Volume Set Function — Delete Volume Set: the Select Volume To Delete list and the Yes/No confirmation.]
[Screen: Volume Set Function — Modify Volume Set: the Select Volume To Modify list and the Volume Modification dialog showing the current volume attributes.]
3.7.3.4.2 Volume Set Migration
Migrating occurs when a volume set is migrating from one RAID level to another, when a volume set strip size changes, or when a disk is added to a RAID set. Migration state is displayed in the volume state area of the Volume Set Information screen.
Note:
Power failure may damage the data being migrated. Please back up the RAID data before you start the migration function.
[Screen: Volume Set Function — The Volume Set Information screen showing Volume State: Migration, together with the volume capacity, SCSI CH/ID/Lun, RAID level, stripe size, block size, member disks, cache attribute, and tag queuing.]
[Screen: Volume Set Function — Check Volume Set: the Select Volume To Check list and the Check The Volume? (Yes/No) confirmation.]
[Screen: Volume Set Function — Display Volume Info: The Volume Set Information screen showing the volume set name, RAID set name, capacity, state, SCSI CH/ID/Lun, RAID level, stripe size, block size, member disks, cache attribute, and tag queuing.]
3.7.4 Physical Drives
Choose this option from the main menu to select a physical disk and perform the operations listed below. Move the cursor bar to an item, then press the Enter key to select the desired function.
[Screen: McBIOS RAID manager main menu with the Physical Drives item.]
3.7.4.2 Create Pass-Through Disk
A pass-through disk is not controlled by the 6Gb/s SAS RAID controller firmware and thus cannot be part of a volume set. The disk is available directly to the operating system as an individual disk. It is typically used on a system where the operating system is on a disk not controlled by the 6Gb/s SAS RAID controller firmware. The SCSI Channel/SCSI ID/SCSI LUN, Cache Mode, and Tag Queuing must be specified to create a pass-through disk.
[Screen: Physical Drive Function — Create Pass-Through Disk: select the drive, then set the Pass-Through Disk Attributes (SCSI Channel/ID/LUN, Cache Mode, Tag Queuing) and confirm with Yes/No.]
[Screen: Physical Drive Function — Delete Pass-Through Disk: select the pass-through drive and confirm with Are you Sure? (Yes/No).]
[Screen: Physical Drive Function — Identify Enclosure: the Select The Enclosure list (Enclosure#1 ARECA SAS RAID Adapter V1.0, Enclosure#2 through #5 Areca x28-05.75.1.37).]
3.7.5.1 Mute The Alert Beeper
The Mute The Alert Beeper function item is used to control the SAS RAID controller beeper. Select Yes and press the Enter key in the dialog box to turn the beeper off temporarily. The beeper will still activate on the next event.
[Screen: Raid System Function — Mute The Alert Beeper (Yes/No) and Alert Beeper Setting (Disabled/Enabled).]
3.7.5.3 Change Password
The manufacturer's default password is set to 0000. The password option allows the user to set or clear the password protection feature. Once the password has been set, the user can monitor and configure the controller only by providing the correct password. This feature is used to protect the internal RAID system from unauthorized access. The controller will check the password only when entering the main menu from the initial screen. The system will automatically go back to the initial screen if it does not receive any command within 5 minutes.

To set or change the password, move the cursor to the Raid System Function screen and select the Change Password item. The Enter New Password screen will appear. Do not use spaces when you enter the password; if spaces are used, it will lock out the user.

To disable the password, press only the Enter key in both the Enter New Password and Re-Enter New Password columns. The existing password will be cleared and no password checking will occur when entering the main menu.
[Screen: Raid System Function — Change Password.]
[Screen: Raid System Function — JBOD/RAID Function and Background Task Priority (UltraLow(5%), Low(20%), Medium(50%), High(80%)).]
3.7.5.6 SATA NCQ Support
The RAID controller supports both SAS and SATA disk drives. SATA NCQ allows multiple commands to be outstanding within a drive at the same time. Drives that support NCQ have an internal queue where outstanding commands can be dynamically rescheduled or re-ordered, along with the necessary tracking mechanisms for outstanding and completed portions of the workload. The 6Gb/s SAS RAID controller allows the user to set SATA NCQ support to Enabled or Disabled.
[Screen: Raid System Function — SATA NCQ Support (Enabled/Disabled) and HDD Read Ahead Cache (Enabled, Disable Maxtor, Disabled).]
3.7.5.8 Volume Data Read Ahead
The volume data read ahead parameter specifies the controller firmware algorithm that processes read-ahead data blocks from the disk. The Volume Data Read Ahead parameter is Normal by default. To modify the value, set it from the Raid System Function menu using the Volume Data Read Ahead option. The default Normal option satisfies the performance requirements for a typical volume. The Disabled value implies no read ahead. The most efficient value for the controller depends on your application: the Aggressive value is optimal for sequential access, but it degrades random access.
[Screen: Raid System Function — Volume Data Read Ahead (Normal, Aggressive, Conservative, Disabled).]
[Screen: Raid System Function — Hdd Queue Depth Setting and Empty HDD Slot LED (ON/OFF).]
3.7.5.11 Controller Fan Detection
The ARC-1880ix series incorporates one large passive heatsink that keeps hot devices such as the ROC and expander chip cool. In addition, newer systems already have enough airflow blowing over the controller, so the large passive heatsink provides adequate cooling for the ROC and expander chip. The Controller Fan Detection function is available in the firmware for detecting the cooling fan function on the ROC, which uses the active cooling fan on the ARC-1882i/x/LP/ixl-8/ixl-12 low profile board. When using the passive heatsink on the controller, disable the Controller Fan Detection function through this McBIOS RAID manager setting. The following screen shows how to change the McBIOS RAID manager setting to disable the warning beeper function. (This function is not available in the web browser setting.)
[Screen: Raid System Function — Controller Fan Detection (Enabled/Disabled).]
[Screen: Raid System Function — Auto Activate Raid When Power on (Disabled/Enabled) and Disk Write Cache Mode (Auto, Disabled, Enabled).]
[Screen: Raid System Function — Capacity Truncation: Truncate Disk Capacity (To Multiples of 10G, To Multiples of 1G, Disabled).]
Multiples Of 10G: If you have 120 GB drives from different vendors, chances are that the capacity varies slightly. For example, one drive might be 123.5 GB and the other 120 GB. Multiples Of 10G truncates the number below the tens digit. This gives both drives the same capacity so that one can replace the other.

Multiples Of 1G: If you have 123 GB drives from different vendors, chances are that the capacity varies slightly. For example, one drive might be 123.5 GB and the other 123.4 GB. Multiples Of 1G truncates the fractional part. This gives both drives the same capacity so that one can replace the other.

Disabled: The capacity is not truncated.
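The truncation itself is plain integer arithmetic on the reported capacity. A minimal sketch of that arithmetic is shown below (illustrative only; the firmware applies the selected setting internally).

    # A minimal sketch of the Capacity Truncation arithmetic described above
    # (illustrative only, not firmware code).
    def truncate_capacity_gb(capacity_gb, mode):
        if mode == "Multiples Of 10G":
            return int(capacity_gb // 10) * 10   # drop everything below the tens
        if mode == "Multiples Of 1G":
            return int(capacity_gb)              # drop the fractional part
        return capacity_gb                       # Disabled: no truncation

    print(truncate_capacity_gb(123.5, "Multiples Of 10G"))  # 120, matching a 120 GB drive
    print(truncate_capacity_gb(123.5, "Multiples Of 1G"))   # 123, matching a truncated 123.4 GB drive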
[Screen: McBIOS RAID manager main menu with the Hdd Power Management item.]
[Screen: Hdd Power Management — Stagger Power On (0.4, 0.7, 1.0, 1.5, ... 6.0).]
[Screen: Hdd Power Management — Time To Hdd Low Power Idle (Disabled, 2, 3, 4, 5, 6, 7).]
[Screen: Hdd Power Management — Time To Low RPM Mode (Disabled, 10, 20, 30, 40, 50, 60).]
[Screen: Hdd Power Management — Time To Spin Down Idle Hdd (Disabled, 1, 3, 5, 10, 15, 20, 30, 40, 60).]
3.7.7 Ethernet Configuration
Use this feature to set the controller Ethernet port configuration. It is not necessary to create reserved disk space on any hard disk for the Ethernet port and HTTP service to function; these functions are built into the controller firmware. Move the cursor bar to the main menu Ethernet Configuration item and then press the Enter key. The Ethernet Configuration menu appears on the screen. Move the cursor bar to an item, then press the Enter key to select the desired function.
[Screen: McBIOS RAID manager main menu with the Ethernet Configuration item.]
disable the DHCP function. If DHCP is disabled, it will be necessary to manually enter a static IP address that does not conflict with other devices on the network.
I/O Port Addr : 28000000h, F2(Tab): Select Controller, F10: Reboot System Areca Technology Corporation RAID Controller Main Menu Quick Volume/Raid Setup Raid Set Function Volume Set Function Ethernet Configuration Physical Drives Raid System DHCP Function Function Select DHCP Setting DHCP Function Enable : : Enable Hdd Power Management Local IP Address : 192.168.001.100 Disabled Ethernet Configuration HTTP Port Number : 80 Enabled View System Events Port Number : 23 Telent Clear Event Buffer Port Number : 25 SMTP Hardware Monitor EtherNet Address : 00. 04. D9.7F .FF. FF System information ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
(McBIOS screen: "Ethernet Configuration" -> "Local IP Address"; Edit The Local IP Address: 192.168.001.100)
3.7.7.3 HTTP Port Number
To manually configure the HTTP port number of the controller, move the cursor bar to the "HTTP Port Number" item, then press the Enter key to show the default setting in the RAID controller. You can then reassign the default HTTP port number of the controller.
(McBIOS screen: "Ethernet Configuration" -> "HTTP Port Number"; Edit The HTTP Port Number: 0080)
(McBIOS screen: "Ethernet Configuration" -> "Telnet Port Number"; Edit The Telnet Port Number: 0023)
3.7.7.5 SMTP Port Number
To manually configure the SMTP port number of the controller, move the cursor bar to the main menu "Ethernet Configuration" item and press the Enter key. The Ethernet Configuration menu appears on the screen. Move the cursor bar to the "SMTP Port Number" item, then press the Enter key to show the default setting in the RAID controller. You can then reassign the default SMTP port number of the controller.
(McBIOS screen: "Ethernet Configuration" -> "SMTP Port Number"; Edit The SMTP Port Number: 0025)
3.7.8 View System Events
To view the 6Gb/s SAS RAID controller's system events information, move the cursor bar to the main menu, select the "View System Events" link, and press the Enter key. The 6Gb/s SAS RAID controller's events screen appears. Choose this option to view the system events information: Timer, Device, Event Type, Elapsed Time, and Errors. The RAID system does not have a built-in real-time clock; the time information is the relative time since the 6Gb/s SAS RAID controller was powered on.
(McBIOS screen: "View System Events"; columns: Time, Device, Event Type, ElapseTime, Errors. Sample entries: H/W Monitor "Raid Powered On", RS232 Terminal "VT100 Log In", ARC-1880-VO#001 "Start Initialize")
3.7.10 Hardware Monitor
To view the RAID controller's hardware monitor information, move the cursor bar to the main menu and select the "Hardware Monitor" option. The Controller H/W Monitor screen appears, providing the CPU temperature, controller temperature, and voltage readings of the 6Gb/s SAS RAID controller.
(McBIOS screen: "Hardware Monitor"; readings include CPU Temperature, Controller Temp., CPU Fan, 12V, 5V, 3.3V, DDR-II +1.8V, PCI-E +1.8V, CPU +1.8V, CPU +1.2V, and DDR-II +0.9V)
(McBIOS screen: "System Information"; fields include Main Processor: 800MHz PPC440, CPU ICache Size: 32KBytes, CPU DCache Size: 32KBytes/Write Back, System Memory: 512MB/800MHz/ECC, Firmware Version: V1.48 2010-4-8, BOOT ROM Version: V1.48 2010-4-8, PL Firmware Ver: 5.0.4.0, Serial Number, Unit Serial #, Controller Name: ARC-1880, Current IP Address: 192.168.0.55)
4. Driver Installation
This chapter describes how to install the 6Gb/s SAS RAID controller driver in your operating system. The installation procedures use the following terminology:
Installing the operating system on a 6Gb/s SAS controller volume
You have a new drive configuration without an operating system and want to install the operating system on a disk drive managed by the 6Gb/s SAS RAID controller. The driver installation is part of the operating system installation.
Installing the 6Gb/s SAS RAID controller into an existing operating system
The computer has an existing operating system installed and the 6Gb/s SAS RAID controller is being installed as a secondary controller.
Have all required system hardware and software components on hand before proceeding with the setup and installation. Materials required:
Microsoft Windows 7/2008/Vista/XP/2003, Linux, FreeBSD, Solaris or Mac installation CD
6Gb/s SAS RAID controller software CD
6Gb/s SAS RAID controller
system installations. For Windows 7/2008/Vista, you can copy the Windows driver file to a USB device and install from it. Determine the correct kernel version and identify which diskette images contain drivers for that kernel. If the driver file ends in .img, you can also create the driver diskette using the dd utility. The following steps are required to create the driver diskettes:
1. The computer system BIOS must be set to boot from the CD-ROM.
2. Insert the ARC-1880 software driver CD disc into the CD-ROM drive.
3. The system will boot from the CD-ROM drive; to create the driver diskettes, select the "SAS RAID Controller Driver Diskette Make Utility", and a screen with several choices will be displayed.
4. Move the highlight bar to the "Create Driver Disk" entry and press Enter.
5. The screen queries the ARC-1880 SAS RAID controller's support driver database and displays a list of available drivers. Move the highlight bar to the correct driver entry and press the Enter key to select it.
6. The next screen will show "Please insert a formatted diskette into drive A:!! Press any key to continue". Insert the formatted diskette into drive A and press any key to continue.
7. The window will display the driver building message "Now is writing to Cylinder..." as it copies the image file from the CD-ROM to the driver diskette.
8. The "Write Complete !!" message will display when the driver diskette is ready.
The driver diskette is now ready. Proceed to the following instructions for the installation procedures.
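If you prefer to write an .img driver image from a Linux or Mac workstation instead of using the CD utility, a dd command along these lines can be used (the image file name and the floppy device node are examples; substitute the ones on your system):

    dd if=driver.img of=/dev/fd0 bs=1440k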
4.2 Driver Installation for Windows
The 6Gb/s SAS RAID controller can be used with Microsoft Windows 7/2008/Vista/XP/2003. The 6Gb/s SAS RAID controllers support SCSI Miniport and StorPort drivers for Windows 7/2008/Vista/2003.
volume set is created and configured, continue with the next step to install the operating system.
3. Insert the Windows setup CD and reboot the system to begin the Windows installation.
Note:
The computer system BIOS must support booting from CD-ROM.
4. Press F6 as soon as the Windows screen shows "Setup is inspecting your computer's hardware configuration". A message stating "Press F6 to specify third-party RAID controller" will display during this time. This must be done or else the Windows installer will not prompt for the driver from the 6Gb/s SAS RAID controller and the driver diskette will not be recognized.
5. The next screen will show: "Setup could not determine the type of one or more mass storage devices installed in your system". Select to specify an additional SCSI controller by pressing S.
6. Windows will prompt you to place the "Manufacturer-supplied hardware support disk" into floppy drive A:. Insert the SAS RAID series driver diskette in drive A: and press the Enter key.
7. Windows will check the floppy; select the correct card and CPU type for your hardware from the listing and press the Enter key to install it.
8. After Windows scans the hardware and finds the controller, it will display: "Setup will load support for the following mass storage devices: ARECA [Windows X86-64 Storport] SATA/SAS PCI RAID Host Controller (RAID6-Engine Inside)". Press the Enter key to continue and copy the driver files. From this point on, simply follow the Microsoft Windows installation procedure. Follow the on-screen instructions, responding as needed, to complete the installation.
9. After the installation is completed, reboot the system to load the new driver/operating system.
10. See Chapter 5 in this manual to customize your RAID volume sets using McRAID Storage Manager.
1. Follow the instructions in Chapter 2, the Hardware Installation chapter, to install the controller and connect the disk drives or enclosure.
2. Start the system and then press Tab+F6 to enter the controller McBIOS RAID manager. Use the configuration utility to create the RAID set and volume set. For details, see Chapter 3, McBIOS RAID Manager. Once a volume set is created and configured, continue with installation of the driver.
3. Reboot Windows; the OS will recognize the 6Gb/s SAS RAID controller and launch the "Found New Hardware Wizard", which guides you in installing the SAS RAID driver.
4. The "Upgrade Device Driver Wizard" will pop up and provide a choice of how to proceed. Choose "Display a list of known drivers for this device, so that you can choose a specific driver" and click Next.
5. When the next screen queries the user about utilizing the currently installed driver, click the "Have Disk" button.
6. When the "Install From Disk" dialog appears, insert the 6Gb/s SAS RAID controller driver diskette or the shipping software CD and type in or browse to the correct path for the "Copy Manufacturer's Files from:" dialog box.
7. After specifying the driver location, the previous dialog box will appear showing the selected driver to be installed. Click the Next button.
8. The "Digital Signature Not Found" screen will appear. Click Yes to continue the installation.
9. Windows automatically copies the appropriate driver files and rebuilds its driver database.
10. The "Found New Hardware Wizard" summary screen appears; click the Finish button.
11. The "System Settings Change" dialog box appears. Remove the diskette from the drive and click Yes to restart the computer to load the new drivers.
12. See Chapter 5 in this manual for information on customizing your RAID volumes using McRAID Storage Manager.
1. Ensure that you have closed all applications and are logged in with administrative rights.
2. Open Control Panel, start the "Add/Remove Programs" icon, and uninstall the software for the 6Gb/s SAS RAID controller.
3. Go to Control Panel and select "System". Select the "Hardware" tab and then click the "Device Manager" button. In Device Manager, expand the "SCSI and RAID Controllers" section. Right-click on the Areca 6Gb/s SAS RAID controller and select "Uninstall".
4. Click Yes to confirm removing the SAS RAID driver. The prompt to restart the system will then be displayed.
is the Linux driver source, which can be used to compile an updated version of the driver for RedHat, SuSE and other versions of Linux. Please refer to the readme.txt file on the included Areca CD or website for how to make the driver diskette and install the driver on the system.
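As a rough sketch of what building and loading the Linux driver module typically looks like (arcmsr is the name of the Areca driver module in the Linux kernel; the exact build steps and paths are distribution-specific, so treat the readme.txt instructions as authoritative):

    cd /usr/src/arcmsr        # unpacked driver source directory (example path)
    make                      # build the module against the running kernel headers
    insmod ./arcmsr.ko        # load the freshly built module
    # or, if the in-tree driver is already installed:
    modprobe arcmsr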
4.6.1 Installation Procedures
This section describes detailed instructions for installing the Areca Mac driver & utility for the ARC-1880 series on your Apple Mac Pro. You must have administrative-level permissions to install the Areca Mac driver & utility. You can use the installer to install the Areca Mac driver & utility (MRAID) at once, or use Custom to install specific components. Follow the process below to install the driver & utility on your Apple Mac Pro:
1. Insert the Areca Mac Driver & Software CD that came with your Areca 6Gb/s SAS RAID controller.
2. Double-click on the install_mraid.zip file that resides at <CD-ROM>\packages\MacOS to add the installer to the Finder.
3. Launch the installer by double-clicking install_mraid in the Finder.
4. Follow the installer's on-screen steps, responding as needed, to complete the Areca driver and MRAID (ArcHTTP and CLI utility) installation.
A driver is required for the operating system to be able to interact with the Areca RAID controller. ArcHTTP has to be installed for the GUI RAID console (MRAID storage manager) to run. It also runs as a service or daemon in the background that allows capturing of events for mail and SNMP trap notification. Refer to section 5.6, "ArcHttp Configuration", for details about the mail and SNMP trap configuration. The Command Line Interface (CLI) lets you set up and manage the RAID controller through a command line interface. Arc-cli performs many tasks at the command line. You can download the arc-cli manual from the Areca website or from the software CD <CD-ROM>\DOCS directory.
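For orientation only, a typical arc-cli session looks something like the sketch below; the command names shown are illustrative and the exact syntax depends on the CLI version, so consult the arc-cli manual in <CD-ROM>\DOCS for the authoritative reference:

    sys info       # show controller information
    rsf info       # list RAID sets
    vsf info       # list volume sets
    disk info      # list physical drives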
5. A reboot is required to complete the installation (this will start ArcHTTP so the RAID console can be used).
6. See Chapter 5 in this manual for information on customizing your RAID volumes using McRAID storage manager.
Normally archttp64 and arc_cli are installed at the same time on the 6Gb/s SAS RAID controller. Once archttp and arc_cli have been installed, the background task automatically starts each time you start your computer. There is one MRAID icon on your desktop. This icon is for you to start up the McRAID storage manager (via archttp) and the arc_cli utility. You can also upgrade individual items, the driver (using the ArcMSR-x.x.dmg file), archttp, or arc_cli, which reside in the <CD-ROM>\packages\MacOS directory.
The HTTP management software (ArcHttp) runs as a service or daemon and automatically starts the proxy for all controllers found. This way the controller can be managed remotely without having to sign in to the server. The HTTP management software (ArcHttp) also integrates email notification and an SNMP extension agent. The email notification can be configured from a local or remote standard web browser.
Note:
If your controller has an onboard LAN port, you do not need to install the ArcHttp proxy server; you can use McRAID Storage Manager directly.
Follow the on-screen prompts to complete ArcHttp proxy server software installation. A program bar appears that measures the progress of the ArcHttp proxy server setup. When this screen completes, you have completed the ArcHttp proxy server software setup. 4. After a successful installation, the Setup Complete dialog box is displayed.
General Configuration:
Binding IP: Restrict the ArcHttp proxy server to bind only a single interface (if there is more than one physical network interface in the server).
HTTP Port#: Value 1~65535.
Display HTTP Connection Information To Console: Select "Yes" to show HTTP send/receive byte information in the console.
Scanning PCI Device: Select "Yes" for ARC-1XXX series controllers.
Scanning RS-232 Device: No.
Scanning Inband Device: No.
(1). SMTP Server Configuration:
SMTP Server IP Address: Enter the SMTP server IP address (this is not the McRAID manager IP). Ex: 192.168.0.2.
(2). Mail Address Configurations:
Sender Name: Enter the sender name that will be shown in the outgoing mail. Ex: RaidController_1.
Mail address: Enter the sender email address that will be shown in the outgoing mail, but do not type an IP address in place of the domain name. Ex: [email protected].
Account: Enter the valid account if your SMTP mail server needs authentication.
Password: Enter the valid password if your SMTP mail server needs authentication.
MailTo Name: Enter the alert receiver name that will be shown in the outgoing mail.
Mail Address: Enter the alert receiver mail address. Ex: [email protected].
Note:
For the Event Notification Table, refer to Appendix D. After you confirm and submit the configuration, you can use the "Generate Test Event" feature to make sure these settings are correct.
The "Enter Network Password" dialog screen appears; type the User Name and Password. The RAID controller default User Name is "admin" and the Password is "0000". After entering the user name and password, press the Enter key to access the McRAID storage manager.
Note:
You can find the controller Ethernet port IP address in the McBIOS RAID manager "System Information" option.
To display RAID set information, move the mouse cursor to the desired RAID set number, then click it. The RAID set information will appear. To display volume set information, move the mouse cursor to the desired volume set number, then click it. The volume set Information will display. To display drive information, move the mouse cursor to the desired physical drive number, then click it. The drive information will display.
The number of physical drives in the 6Gb/s SAS RAID controller determines the RAID levels that can be implemented with the RAID set. You can create a RAID set associated with exactly one volume set. The user can change the RAID Level, Capacity, Initialization Mode, and Stripe Size. A hot spare option is also created, depending on the existing configuration. Click the "Confirm The Operation" check box and click the "Submit" button on the Quick Create screen; the RAID set and volume set will start to initialize.
Note:
To create a RAID 30/50/60 volume, you need to create multiple RAID sets first (up to 8 RAID sets) with the same disk members on each RAID set. The maximum number of disk drives per RAID set is 32 for RAID 0/1/10(1E)/3/5/6 and 128 for RAID 30/50/60.
Press Yes to start the expansion on the RAID set. The new additional capacity can be utilized by one or more volume sets. The volume sets associated with this RAID set appear so that you have the chance to modify the RAID level or stripe size. Follow the instructions presented in the Modify Volume Set section to modify the volume sets; operating system-specific utilities may be required to expand operating system partitions.
Note:
1. Once the Expand Raid Set process has started, the user can not stop it. The process must be completed.
2. If a disk drive fails during RAID set expansion and a hot spare is available, an auto rebuild operation will occur after the RAID set expansion completes.
3. RAID 30/50/60 does not support the "Expand Raid Set" operation.
4. RAID set expansion is a critical process; we strongly recommend backing up your data before expanding. An unexpected accident may cause serious data corruption.
Note:
Please contact us to make sure whether you need to use the rescue function. Improper usage may cause configuration corruption.
Volume Name
The default volume name will always appear as ARC-1880VOL. You can rename the volume set provided it does not exceed the 15-character limit.
Capacity
The maximum volume size is the default initial setting. Enter the appropriate volume size to fit your application.
If the volume capacity will exceed 2TB, the controller will show the "Greater Two TB Volume Support" sub-menu, with the options "No", "64bit LBA", and "4K Block".
No: Keeps the volume size with the max. 2TB limitation.
64bit LBA: This option uses a 16-byte CDB instead of a 10-byte CDB. The maximum volume capacity is up to 512TB. This option works on operating systems that support 16-byte CDBs, such as Windows 2003 with SP1 or later and Linux kernel 2.6.x or later.
4K Block: This option changes the sector size from the default 512 bytes to 4K bytes. The maximum volume capacity is up to 16TB. This option works under the Windows platform only, and the volume can not be converted to a Dynamic Disk, because the 4K sector size is not a standard format. For more details please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
Initialization Mode
Press the Enter key to define Background Initialization, Foreground Initialization, or No Init (To Rescue Volume). With Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes.
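As a quick sanity check on these limits (assuming the conventional 512-byte and 4K sector sizes; the 32-bit sector count comes from the LBA field of a 10-byte CDB, and the 512TB figure for "64bit LBA" is the firmware ceiling quoted above rather than an addressing limit):

    2^32 sectors x 512 bytes  = 2 TB     (option "No")
    2^32 sectors x 4096 bytes = 16 TB    (option "4K Block")
    16-byte CDB, 64-bit LBA   = the 2^32 sector limit no longer applies (option "64bit LBA")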
Stripe Size
This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 10, 5, 6, 50 or 60 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512KB or 1M. A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you are sure that your computer does random reads more often, select a smaller stripe size.
Note:
RAID level 3 can't modify the cache stripe size.
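To put a number on the stripe-size trade-off (the drive count and stripe size here are hypothetical): in a RAID 5 volume built from 5 drives with a 64 KB stripe size, one full stripe carries

    64 KB x (5 - 1) = 256 KB

of user data, so a 256 KB sequential write can be serviced as a single full-stripe write, while a small random write touches only one 64 KB strip plus its parity.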
Cache Mode
The 6Gb/s SAS RAID controller supports Write-Through and Write-Back cache.
Tagged Command Queuing
The Enabled option is useful for enhancing overall system performance under multi-tasking operating systems. The Command Tag (Drive Channel) function controls the SAS command tag queuing support for each drive channel. This function should normally remain Enabled. Disable this function only when using SAS drives that do not support command tag queuing.
SCSI Channel/SCSI ID/SCSI Lun
SCSI Channel: The 6Gb/s SAS RAID controller function is simulated as an external SCSI RAID controller. The host bus is represented as a SCSI channel. Choose the SCSI channel.
SCSI ID: Each SCSI device attached to the SCSI card, as well as the card itself, must be assigned a unique SCSI ID number.
Note:
RAID levels 30, 50, and 60 can support up to eight RAID sets (four pairs), but they can not support expansion and migration.
Note:
1. If the volume is RAID level 30, 50, or 60, you can not change the volume to another RAID level. If the volume is RAID level 0, 1, 10(1E), 3, 5, or 6, you can not change the volume to RAID level 30, 50, or 60.
2. A power failure may damage the migration data. Please back up the RAID data before you start the migration function.
Note:
Please make sure of the source of the inconsistency, whether it was generated by a parity error or a bad block, before you select the recovery method. Otherwise, you will lose the data to be recovered.
Note:
If you configure the HTTP Port Number to 0, the HTTP console will be closed.
Note:
The NTP feature works through the onboard Ethernet port, so you must make sure that the onboard Ethernet port is connected.
6.9 Information
6.9.1 Raid Set Hierarchy
Use this feature to view the 6Gb/s SAS RAID controller's current RAID set, current volume set, and physical disk information. The volume state and capacity are also shown on this screen.
Appendix A
Upgrading Flash ROM Update Process
A-1 Overview
Since the PCIe 2.0 6Gb/s SAS RAID controller features flash ROM firmware, it is not necessary to change the hardware flash chip in order to upgrade the RAID firmware. The user can simply re-program the old firmware through the In-Band PCIe 2.0 bus or the Out-of-Band Ethernet port McRAID storage manager and the nflash DOS utility. New releases of the firmware are available in the form of DOS files on the shipped CD or the Areca website. The files available at the FTP site for each model contain the following files in each version:
ARC1880NNNN.BIN: Software binary code (NNNN refers to the software code type)
ARC1880BIOS.BIN: PCIe 2.0 BIOS for the system board
ARC1880BOOT.BIN: RAID controller hardware initialization
ARC1880FIRM.BIN: RAID kernel program
ARC1880MBR0.BIN: Master Boot Record for supporting Dual Flash Image in the 6Gb/s SAS RAID controller
README.TXT contains the history information of the software code changes in the main directory. Read this file first to make sure you are upgrading to the proper binary file. Select the right file for the upgrade. Normally, users upgrade ARC1880BIOS.BIN for system M/B compatibility and ARC1880FIRM.BIN for RAID function upgrades.
Note:
Please update all binary code (BIOS, BOOT and FIRM) before you reboot the system. Otherwise, a mixed firmware package may hang the controller.
A-2 Upgrading Firmware Through McRAID Storage Manager Get the new version firmware for your 6Gb/s SAS RAID controller. For example, download the bin file from your OEMs web site onto the C: drive.
1. To upgrade the 6Gb/s SAS RAID controller firmware, move the mouse cursor to the "Upgrade Firmware" link. The "Upgrade The Raid System Firmware or Boot Rom" screen appears.
2. Click "Browse". Look in the location to which the firmware upgrade software was downloaded. Select the file name and click "Open".
3. Click "Confirm The Operation" and press the "Submit" button.
4. The web browser begins to download the firmware binary to the controller and starts to update the flash ROM.
5. After the firmware upgrade is complete, a bar indicator will show "Firmware Has Been Updated Successfully".
6. After the new firmware completes downloading, find a chance to restart the controller/computer for the new firmware to take effect.
The web browser-based McRAID storage manager can be accessed through the In-Band PCIe bus or the Out-of-Band LAN port. The In-Band method uses the ArcHttp proxy server to launch the McRAID storage manager. The Out-of-Band method allows local or remote access to the McRAID storage manager from any standard internet
browser via a LAN or WAN with no software or patches required. For a controller with an onboard LAN port, you can directly plug an Ethernet cable into the controller LAN port, then enter the McBIOS RAID manager to configure the network setting. After the network setting is configured and saved, you can find the current IP address on the McBIOS RAID manager "System Information" page. From a remote PC, you can directly open a web browser and enter the IP address, then enter the user name and password to log in and start your management. You can find the firmware update feature in the "Raid System Console" under the "System Controls" option.
A-3 Upgrading Firmware Through nflash DOS Utility
Areca offers an alternative means of communication for the 6Gb/s SAS RAID controller: upgrade all files (BIOS, BOOT, FIRM and MBR0) without the need to start up the system and run the ArcHttp proxy server. The nflash utility program is a DOS application which runs in the DOS operating system. Make sure the nflash DOS utility can communicate properly with the 6Gb/s SAS RAID controller. Please make a bootable DOS floppy diskette or USB device from another Windows operating system and boot up the system from that bootable device.
Starting the nflash Utility
You do not need to short any jumper cap to run the nflash utility. The nflash utility provides an on-line table of contents and brief descriptions of the help sub-commands. The nflash utility is located in the <CD-ROM>\Firmware directory. You can run <nflash> to get more detailed information about the command usage. Typical output looks as below:
A:\>nflash
Raid Controller Flash Utility V1.11 2007-11-8
Command Usage:
  NFLASH FileName
  NFLASH FileName /cn  --> n=0,1,2,3  write binary to controller #0
  FileName may be ARC1880FIRM.BIN or ARC1880*
  For ARC1880*, it will expand to ARC1880BOOT/FIRM/BIOS.BIN

A:\>nflash arc188~1.bin
Raid Controller Flash Utility V1.11 2007-11-8
MODEL : ARC-1880
MEM FE620000 FE7FF000
File ARC188~1.BIN : >>*** => Flash OK
A-4 Upgrading Firmware Through CLI
The Command Line Interface (CLI) lets you configure and manage the 6Gb/s SAS RAID controller components in Windows, Linux, FreeBSD and other environments. The CLI is useful in environments where a graphical user interface (GUI) is not available. Through the CLI, you can perform the same firmware upgrade that you can perform with the McRAID storage manager GUI. The controller has an added protocol in the firmware for users to update the controller firmware package (BIOS, BOOT, FIRM and MBR0) through the utility. To update the controller firmware, follow the procedure below:
Parameter: <path=<PATH_OF_FIRMWARE_FILE>>
Fn: Firmware Updating
Ex: Update firmware and the file path is [C:\FW\ARC1880FIRM.BIN]
Command: sys updatefw path=c:\fw\arc1880firm.bin [Enter]
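As a minimal sketch of a full package update (the file names follow the list in A-1 and C:\FW is an example directory; run the documented sys updatefw command once per binary, then restart the controller so the new package takes effect):

    sys updatefw path=c:\fw\arc1880bios.bin
    sys updatefw path=c:\fw\arc1880boot.bin
    sys updatefw path=c:\fw\arc1880firm.bin
    sys updatefw path=c:\fw\arc1880mbr0.bin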
Appendix B
Battery Backup Module (ARC-6120BAT113)
B-1 Overview
The 6Gb/s SAS RAID controller operates using cache memory. The Battery Backup Module is an add-on module that provides power to the 6Gb/s SAS RAID controller cache memory in the event of a power failure. The Battery Backup Module monitors the write back cache on the 6Gb/s SAS RAID controller, and provides power to the cache memory if it contains data not yet written to the hard drives when power failure occurs.
B-2 BBM Components
This section provides the board layout and connector/jumper for the BBM.
B-3 Status of BBM
D13 (Green): lights when the BBM is activated
D14 (Red): lights when the BBM is charging
D15 (Green): lights when the BBM is normal
Note:
The BBM status will be shown on the "Hardware Monitor Information" screen of the web browser.
B-4 Installation
1. Make sure all power to the system is disconnected.
2. Connector J2 is available for the optional battery backup module. Connect the BBM cable to the 12-pin battery connector on the controller.
3. Integrators may provide pre-drilled holes in their cabinet for securing the BBM using its three mounting positions.
4. A low-profile bracket is also provided.
5. The BBM will occupy one PCI slot on the host backplane.
B-5 Battery Backup Capacity
Battery backup capacity is defined as the maximum duration of a power failure for which data in the cache memory can be maintained by the battery. The BBM's backup capacity varies with the memory chips installed on the 6Gb/s SAS RAID controller.
B-6 Operation
1. Battery conditioning is automatic. There are no manual procedures for battery conditioning or preconditioning to be performed by the user.
2. In order to make sure all of the capacity is available for your battery cells, allow the battery cell to be fully charged when installed for the first time. The first-time charge of a battery cell takes about 24 hours to complete.
B-7 Changing the Battery Backup Module
At some point, the Li-ion battery will no longer accept a charge properly. Li-ion battery life expectancy is anywhere from approximately 1 to 5 years.
1. Shut down the operating system properly. Make sure that the cache memory has been flushed.
2. Disconnect the BBM cable from J2 on the 6Gb/s SAS RAID controller.
3. Disconnect the battery pack cable from JP2 on the BBM.
4. Install a new battery pack and connect the new battery pack to JP2.
5. Connect the BBM to J2 on the 6Gb/s SAS RAID controller.
6. Disable the write-back function from the McBIOS RAID manager or McRAID storage manager.
Note:
Do not remove the BBM while the system is running.
B-8 Battery Functionality Test Procedure:
1. Write an amount of data to a controller volume, about 5GB or bigger.
2. Wait a few seconds, then fail the system power by removing the power cable.
3. Check the battery status; make sure D13 is lit and the battery beeps every few seconds.
4. Power on the system, and press Tab/F6 to log in to the controller.
5. Check the controller event log; make sure the event shows that the controller booted up with power recovered.
B-9 BBM Specifications
Mechanical
Module Dimension (W x H x D): 37.3 x 13 x 81.6 mm
BBM Connector: 2 x 6 box header
Environmental
Operating Temperature: 0°C to +40°C; Humidity: 45-85%, non-condensing
Storage Temperature: -40°C to +60°C; Humidity: 45-85%, non-condensing
Electrical
Input Voltage: +3.6VDC
On-Board Battery Capacity: 1880mAh (1 x 1880mAh) for the ARC-1880 series board
Appendix C
SNMP Operation & Installation
C-1 Overview
The McRAID storage manager includes a firmware-embedded Simple Network Management Protocol (SNMP) agent and an SNMP Extension Agent for the Areca RAID controller. An SNMP-based management application (also known as an SNMP manager) can monitor the disk array. Examples of SNMP management applications are Hewlett-Packard's OpenView, Net-SNMP, and SNMPc. The SNMP extension agent can be used to augment the Areca RAID controller if you are already running an SNMP management application at your site.
C-2 SNMP Definition
SNMP, an IP-based protocol, has a set of commands for getting the status of target devices. The SNMP management platform is called the SNMP manager, and the managed devices have the SNMP agent loaded. Management data is organized in a hierarchical data structure called the Management Information Base (MIB). These MIBs are defined and sanctioned by various industry associations. Each type of device on your network has its own specific MIB file. The MIB file defines the device as a set of managed object values that can be read or changed by the SNMP manager. The MIB file enables the SNMP manager to interpret trap messages from devices. To make sense of a trap that's sent by a device, the SNMP manager needs to have access to the MIB that describes the format and content of the possible traps that the device can send. The objective is for all vendors to create products in compliance with these MIBs so that inter-vendor interoperability can be achieved. To be available to the SNMP manager, a command adds the MIB file for each of the devices to the MIB database. This enables the devices to be managed via the SNMP manager. The following figure illustrates the various components of an SNMP-based management architecture.
(Figure: components of an SNMP-based management architecture, from the manager application down to the physical managed object)
C-3 SNMP Installation
Perform the following steps to install the Areca RAID controller SNMP function into the SNMP manager. The installation of the SNMP manager is accomplished in several phases:
Step 1. Installing the SNMP manager software on the client
Install the SNMP manager software on the client. This installation process is well-covered in the User's Guide of your SNMP manager application.
Step 2. Compiling the MIB description file with the management application
Place a copy of the RAID controller's MIB file in a directory which is accessible to the management application and compile the MIB description file with the SNMP management application database. Before the manager application accesses the Areca RAID controller, it is necessary to integrate the MIB into the management application's database of events and status indicator codes. This process is known as compiling the MIB into the application. This process is highly vendor-specific and should be well-covered in the User's Guide of your SNMP manager application. Ensure the compilation process successfully integrates the contents of the areca_sas.mib file into the traps database. The MIB file resides at <CD-ROM>\packages\SNMP_MIBs on the software CD or can be downloaded from http://www.areca.com.tw.
Each RAID controller needs to have its own MIB file. Areca provides MIB files for 4 adapters for users. Users can request more if additional controllers are installed in one system.
Note:
1. The MIB compiler may not be installed by default with the SNMP manager.
2. Some SNMP managers have unique rules on the format of MIB files; you may need to refer to the error messages and modify the MIB file to meet the software requirements.
Step 3. SNMP Service Method
With Areca series RAID cards, there are 3 service methods to get SNMP: ArcHttp, Onboard NIC, and In-band PCIe + SNMP extension agent.
(1). Service Method-1: using ArcHttp Proxy Server
Pay attention to these points: Do not check the option "SNMP Through PCI". Make sure you have the latest driver and ArcHttp, from this URL: http://www.areca.com.tw/support/. ArcHttp supports sending traps only; it does not support the get command.
(2). Service Method-2: using Onboard NIC
Pay attention to these points: Do not check the option "SNMP Through PCI". You do need to fill out the SNMP Trap Config.
(3). Service Method-3: using In-band PCI + SNMP extension agent
Pay attention to these points: Download the SNMP extension agent from the Areca URL. The agent is to be installed on the system which has the Areca card. Check the option "SNMP Through PCI". To use the In-Band PCIe host bus interface, keep space (or zero) in all SNMP Trap IP Address options.
C-3-1 Using ArcHttp
The HTTP management software (ArcHttp) runs as a service or daemon and automatically starts the proxy for all controllers found. This way the controller can be managed remotely without having to sign in to the server. The HTTP management software (ArcHttp) also integrates the ability to send SNMP traps. Please refer to the ArcHttp Proxy Service Installation section of this manual to install it. The ArcHttp proxy server will automatically assign one additional port for setting up its configuration. If you want to change the archttpsrv.conf settings of the ArcHttp proxy server configuration, for example General Configuration, Mail Configuration, and SNMP Configuration, please open the Cfg Assistant in a web browser at http://localhost:<port>, for example http://localhost:82. The port number for the ArcHttp proxy server configuration is the McRAID storage manager port number plus 1.
SNMP Traps Configuration:
To enable the controller to send SNMP traps to a client SNMP manager using the IP address assigned to the operating system, such as the Net-SNMP manager, you can simply use the SNMP function on the ArcHttp proxy server software. To enable the RAID controller SNMP trap sending function, click on the "SNMP Configuration" link. The ArcHttp proxy provides only one direction, sending traps to the SNMP manager, without needing to install the SNMP extension agent on the host. If the SNMP manager needs to query SNMP information from the RAID controller, please refer to the "Service Method-2: using Onboard NIC" and "Service Method-3: using In-band PCI + SNMP extension agent" sections. The SNMP Traps Configurations menu will show as follows:
(1). SNMP Trap Configurations
Enter the SNMP trap IP address.
(2). SNMP System Configurations
The community name acts as a password to screen accesses to the SNMP agent of a particular network device. Type the community name of the SNMP agent in this field. Before access is granted to a request station, this station must incorporate a valid community name into its request; otherwise, the SNMP agent will deny access to the system. Most network devices use "public" as the default community name. This value is case-sensitive.
(3). SNMP Trap Notification Configurations
Before the client-side SNMP manager application accepts the Areca RAID controller traps, it is necessary to integrate the MIB into the management application's database of events and status indicator codes. This process is known as compiling the MIB into the application. This process is highly vendor-specific and should be well-covered in the User's Guide of your SNMP application. Ensure the compilation process successfully integrates the contents of the areca_sas.mib file into the traps database. The MIB file resides at <CD-ROM>\packages\SNMP_MIBs on the software CD.
Note:
For the Event Notification Table, refer to Appendix D. After you confirm and submit the configuration, you can use the "Generate Test Event" feature to make sure these settings are correct.
C-3-2 Using Onboard NIC Installation
With the built-in LAN port on the RAID controller, the RAID controller uses its built-in LAN interface. You can use the browser-based manager or the CLI SNMP configuration to set up the firmware-based SNMP configuration. The following screen is the firmware-embedded SNMP configuration setup screen using the browser-based manager:
To launch the browser-based RAID controller SNMP function, click on the "System Controls" link. The "System Controls" menu will show the available items. Select the "SNMP Configuration" item. The firmware-embedded SNMP agent manager monitors all system events and the SNMP function becomes functional with no agent software required.
(1). SNMP Trap Configurations
Enter the SNMP trap IP address.
(2). SNMP System Configurations
The community name acts as a password to screen accesses to the SNMP agent of a particular network device. Type in the community name of the SNMP agent. Before access is granted to a request station, this station must incorporate a valid community name into its request; otherwise, the SNMP agent will deny access to the system. Most network devices use "public" as the default community name. This value is case-sensitive. The system Contact, Name, and Location will be shown in the outgoing SNMP trap.
(3). SNMP Trap Notification Configurations
Please refer to Appendix D, Event Notification Configurations.
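Once the firmware-embedded agent is configured, a quick way to verify it from a management station is a Net-SNMP walk of the enterprise subtree. The IP address and community below are examples taken from the screens in this manual; loading areca_sas.mib (for instance with -m ALL after copying it into your MIB directory) is optional but makes the output readable:

    snmpwalk -v 2c -c public -m ALL 192.168.0.55 .1.3.6.1.4.1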
C-3-3 Using In-band PCI + SNMP Extension Agent Installation
This method uses the IP address assigned to the operating system; the RAID controller is accessed through the Areca SNMP extension agent over the PCIe host bus interface.
a). Set only the Community field and select the SNMP Port option in the firmware-embedded SNMP configuration function. There is no function to set the other fields on the SNMP System Configuration. The SNMP community and SNMP port can be set up by using the browser-based manager or the CLI SNMP configuration. To launch the browser-based RAID controller SNMP function, click on the "System Controls" link. The "System Controls" menu will show the available items. Select the "SNMP Configuration" item. The following SNMP System Configuration screen is launched by the browser-based manager. The community name acts as a password to screen accesses to the SNMP agent of a particular network device. Type in the community name of the SNMP agent. Before access is granted to a request station, this station must incorporate a valid community name into its request; otherwise, the SNMP agent will deny access to the system. Most network devices use "public" as the default community name. This value is case-sensitive.
b). Mark the check box on the "SNMP Through PCI Inband" setting and keep space (or zero) in all SNMP Trap IP Address options.
c). Install the SNMP extension agent on the server. Please refer to the next section, SNMP Extension Agent Installation, for the different operating systems such as Windows, Linux and FreeBSD.
C-3-4 SNMP Extension Agent Installation
The SNMP extension agent on the device is able to return meaningful, highly useful information to the SNMP manager. The Areca RAID controllers support the extension agent for Windows, Linux and FreeBSD. This section details the procedures for installing those extension agents.
C-3-4-1 Windows
You must have administrative-level permission to install the 6Gb/s SAS RAID controller extension agent software. This procedure assumes that the RAID hardware and Windows are both installed and operational in your system. To enable the SNMP agent for Windows, configure Windows for TCP/IP and SNMP services. The Areca SNMP extension agent file is ARCSNMP.DLL. Screen captures in this section are taken from a Windows XP installation. If you are running another version of Windows, your screens may look different, but the Areca SNMP extension agent installation is essentially the same.
1. Insert the Areca RAID controller software CD in the CD-ROM drive.
2. Run the setup.exe file that resides at <CD-ROM>\packages\windows\SNMP\setup.exe on the CD. (If the SNMP service was not installed, please install the SNMP service first.)
4. Click the Next button and then the "Ready to Install the Program" screen will appear. Follow the on-screen prompts to complete the Areca SNMP extension agent installation.
5. A Progress bar appears that measures the progress of the Areca SNMP extension agent setup. When this screen completes, you have completed the Areca SNMP extension agent setup.
6. After a successful installation, the Setup Complete dialog box of the installation program is displayed. Click the Finish button to complete the installation.
Starting SNMP Trap Notification Configurations
To start the "SNMP Trap Notification Configurations", there are two methods. First, double-click on the "Areca RAID Controller".
Second, you may also use the "Taskbar Start/programs/Areca Technology Corp/ArcSnmpConf" menus shown below.
SNMP Community Configurations
The community name acts as a password to screen accesses to the SNMP agent of a particular network device. Type in the community name of the SNMP agent. Before access is granted to a request station, this station must incorporate a valid community name into its request; otherwise, the SNMP agent will deny access to the system. Most network devices use "public" as the default community name. This value is case-sensitive.
SNMP Trap Notification Configurations
The "Community Name" should be the same as the firmware-embedded SNMP Community. The "SNMP Trap Notification Configurations" include level 1: Serious, level 2: Error, level 3: Warning and level 4: Information. Level 4 covers notification events such as initialization of the controller and initiation of the rebuilding process; Level 3 includes events which require the issuance of warning messages; Level 2 covers notification events which once have happened; Level 1 is the highest level, and covers events that need immediate attention (and action) from the administrator.
C-3-4-2 Linux
You must have administrative-level permission to install the Areca RAID software. This procedure assumes that the Areca RAID hardware and Linux are installed and operational in your system. With the old version agent, users had to modify the open source project, integrate the changes from Areca manually, and then take the modified binaries and manually deploy them. Users needed to change source code from the Linux distribution and then maintain it by themselves.
The new version agent provides a way to integrate the code into snmpd/snmptrapd and creates a sub-agent that is easy for users to install. For the new version SNMP extension agent installation procedure for Linux, please refer to <CD-ROM>\packages\Linux\SNMP\readme.txt or download it from ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Linux/SNMP/V4.1/ .
C-3-4-3 FreeBSD
You must have administrative-level permission to install the Areca RAID software. This procedure assumes that the Areca RAID hardware and FreeBSD are installed and operational in your system. With the old version agent, users had to modify the open source project, integrate the changes from Areca manually, and then take the modified binaries and manually deploy them. Users needed to change source code from the distribution and then maintain it by themselves. The new version agent provides a way to integrate the code into snmpd/snmptrapd and creates a sub-agent that is easy for users to install. For the new version SNMP extension agent installation procedure for FreeBSD, please refer to <CD-ROM>\packages\FreeBSD\SNMP\readme.txt or download it from ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/FreeBSD/SNMP/V4.1/ .
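On the Net-SNMP side, a sub-agent of this kind is normally attached to the master snmpd over the AgentX protocol. A minimal sketch of the snmpd.conf lines involved is shown below; the community string is an example, and the exact steps and agent binary name come from the readme.txt above, so treat this only as orientation:

    # /etc/snmp/snmpd.conf
    rocommunity public        # read-only community used by the manager
    master agentx             # enable the AgentX master so the Areca sub-agent can register
    # restart snmpd, then start the Areca sub-agent as described in readme.txt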
Appendix D
Event Notification Configurations
The controller classifies disk array events into four levels depending on their severity. These include level 1: Urgent, level 2: Serious, level 3: Warning and level 4: Information. Level 4 covers notification events such as initialization of the controller and initiation of the rebuilding process; Level 2 covers notification events which once have happened; Level 3 includes events which require the issuance of warning messages; Level 1 is the highest level, and covers events that need immediate attention (and action) from the administrator. The following lists sample events for each level:
A. Device Event
Event | Level | Meaning | Action
Device Inserted | Warning | HDD inserted |
Device Removed | Warning | HDD removed |
Reading Error | Warning | HDD reading error | Keep watching HDD status; it may be caused by noise or an unstable HDD.
Writing Error | Warning | HDD writing error | Keep watching HDD status; it may be caused by noise or an unstable HDD.
ATA Ecc Error | Warning | HDD ECC error | Keep watching HDD status; it may be caused by noise or an unstable HDD.
Change ATA Mode | Warning | HDD changes ATA mode | Check the HDD connection.
Time Out Error | Warning | HDD time out | Keep watching HDD status; it may be caused by noise or an unstable HDD.
Device Failed | Urgent | HDD failure | Replace HDD.
PCI Parity Error | Serious | PCI parity error | If it only happens once, it may be caused by noise. If it always happens, please check the power supply or contact us.
Device Failed (SMART) | Urgent | HDD SMART failure | Replace HDD.
PassThrough Disk Created | Inform | Pass Through Disk created |
PassThrough Disk Modified | Inform | Pass Through Disk modified |
PassThrough Disk Deleted | Inform | Pass Through Disk deleted |
B. Volume Event
Event | Level | Meaning | Action
Start Initialize | Warning | Volume initialization has started |
Start Rebuilding | Warning | Volume rebuilding has started |
Start Migrating | Warning | Volume migration has started |
Start Checking | Warning | Volume parity checking has started |
Complete Init | Warning | Volume initialization completed |
Complete Rebuild | Warning | Volume rebuilding completed |
Complete Migrate | Warning | Volume migration completed |
Complete Check | Warning | Volume parity checking completed |
Create Volume | Warning | New volume created |
Delete Volume | Warning | Volume deleted |
Modify Volume | Warning | Volume modified |
Volume Degraded | Urgent | Volume degraded | Replace HDD.
Volume Failed | Urgent | Volume failure |
Failed Volume Revived | Urgent | Failed volume revived |
Abort Initialization | Warning | Initialization aborted |
Abort Rebuilding | Warning | Rebuilding aborted |
Abort Migration | Warning | Migration aborted |
Abort Checking | Warning | Parity check aborted |
Stop Initialization | Warning | Initialization stopped |
Stop Rebuilding | Warning | Rebuilding stopped |
Stop Migration | Warning | Migration stopped |
Stop Checking | Warning | Parity check stopped |
C. RAID Set Event
Event | Level | Meaning | Action
Create RaidSet | Warning | New RAID set created |
Delete RaidSet | Warning | Raidset deleted |
Expand RaidSet | Warning | Raidset expanded |
Rebuild RaidSet | Warning | Raidset rebuilding |
RaidSet Degraded | Urgent | Raidset degraded | Replace HDD.
D. Hardware Monitor Event
Event | Level | Meaning | Action
Fan Failed | Urgent | Cooling fan failure | Check the cooling fan of the enclosure and replace it with a new one if required.
Controller Temp. Recovered | Serious | Controller temperature back to normal level |
Hdd Temp. Recovered | Serious | HDD temperature back to normal level |
Raid Powered On | Warning | RAID power on |
Test Event | Urgent | Test event |
Power On With Battery Backup | Warning | RAID power on with battery backed up |
Incomplete RAID Discovered | Serious | Some RAID set member disks missing before power on | Check disk information to find out which channel is missing.
HTTP Log In | Serious | An HTTP login detected |
Telnet Log In | Serious | A Telnet login detected |
VT100 Log In | Serious | A VT100 login detected |
API Log In | Serious | An API login detected |
Lost Rebuilding/Migration LBA | Urgent | Some rebuilding/migration RAID set member disks missing before power on | Reinsert the missing member disk; the controller will continue the incomplete rebuilding/migration.
Appendix E
RAID Concept
RAID Set
A RAID set is a group of disks connected to a RAID controller. A RAID set contains one or more volume sets. The RAID set itself does not define the RAID level (0, 1, 1E, 3, 5, 6, 10, 30, 50, 60, etc.); the RAID level is defined within each volume set. Therefore, volume sets are contained within RAID sets and the RAID level is defined within the volume set. If physical disks of different capacities are grouped together in a RAID set, then the capacity of the smallest disk will become the effective capacity of all the disks in the RAID set.
Volume Set
Each volume set is seen by the host system as a single logical device (in other words, a single large virtual hard disk). A volume set will use a specific RAID level, which will require one or more physical disks (depending on the RAID level used). RAID level refers to the level of performance and data protection of a volume set. The capacity of a volume set can consume all or a portion of the available disk capacity in a RAID set. Multiple volume sets can exist in a RAID set. For the RAID controller, a volume set must be created either on an existing RAID set or on a group of available individual disks (disks that are about to become part of a RAID set). If there are pre-existing RAID sets with available capacity and enough disks for the desired RAID level, then the volume set can be created in the existing RAID set of the user's choice.
In the illustration, volume 1 can be assigned a RAID level 5 of operation while volume 0 might be assigned a RAID level 1E of operation. Alternatively, the free space can be used to create volume 2, which could then be set to use RAID level 5.
on the existing volume sets (residing on the newly expanded RAID set) is redistributed evenly across all the disks. A contiguous block of unused capacity is made available on the RAID set. The unused capacity can be used to create additional volume sets. A disk, to be added to a RAID set, must be in normal mode (not failed), free (not a spare, in a RAID set, or passed through to the host) and must have at least the same capacity as the smallest disk capacity already in the RAID set. Capacity expansion is only permitted to proceed if all volumes on the RAID set are in the normal status. During the expansion process, the volume sets being expanded can be accessed by the host system. In addition, the volume sets with RAID level 1, 10, 3, 5 or 6 are protected against data loss in the event of disk failure(s). In the case of disk failure, the volume set changes from migrating state to migrating+degraded state. When the expansion is completed, the volume set would then transition to degraded mode. If a global hot spare is present, then it further changes to the rebuilding state. The expansion process is illustrated in the following figure.
The RAID controller redistributes the original volume set over the original and newly added disks, using the same fault-tolerance configuration. The unused capacity on the expanded RAID set can then be used to create an additional volume set, with a different fault tolerance setting (if required by the user).
Online RAID Level and Stripe Size Migration
For those who wish to later upgrade to any RAID capabilities, a system with online RAID level/stripe size migration allows a simplified upgrade to any supported RAID level without having to reinstall the operating system. The RAID controllers can migrate both the RAID level and stripe size of an existing volume set, while the server is online and the volume set is in use. Online RAID level/stripe size migration can prove helpful during performance tuning activities as well as when additional physical disks are added to the RAID controller. For example, in a system using two drives in RAID level 1, it is possible to add a single drive and add capacity and retain fault tolerance. (Normally, expanding a RAID level 1 array would require the addition of two disks.) A third disk can be added to the existing RAID logical drive and the volume set can then be migrated from RAID level 1 to 5. The result would be parity fault tolerance and double the available capacity without taking the system down. A fourth disk could be added to migrate to RAID level 6. It is only possible to migrate to a higher RAID level by adding a disk; disks in an existing array can't be reconfigured for a higher RAID level without adding a disk. Online migration is only permitted to begin if all volumes to be migrated are in the normal mode. During the migration process, the volume sets being migrated are accessed by the host system. In addition, the volume sets with RAID level 1, 1E, 10, 3, 5 or 6 are protected against data loss in the event of disk failure(s). In the case of disk failure, the volume set transitions from migrating state to (migrating+degraded) state. When the
Online Volume Expansion

Performing a volume expansion on the controller is the process of growing only the size of the latest volume. A more flexible option is for the array to concatenate an additional drive into the RAID set and then expand the volumes on the fly. This happens transparently while the volumes are online; at the end of the process, the operating system will detect free space after the existing volume. Windows, NetWare and other advanced operating systems support volume expansion, which enables you to incorporate the additional free space within the volume into the operating system partition. The operating system partition is extended to incorporate the free space so it can be used by the operating system without creating a new operating system partition. You can use the Diskpart.exe command line utility, included with Windows Server 2003 or the Windows 2000 Resource Kit, to extend an existing partition into free space on a dynamic disk. Third-party software vendors have created utilities that can be used to repartition disks without data loss; most of these utilities work offline. Partition Magic is one such utility.
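As a rough illustration of the Diskpart workflow mentioned above, a typical sequence looks like the following; the volume number shown is only a placeholder and must be replaced with the number reported by "list volume":

    diskpart
    DISKPART> list volume        (identify the volume that gained adjacent free space)
    DISKPART> select volume 2    (example number only; use your own volume number)
    DISKPART> extend             (grow the selected volume into the free space)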
High Availability
A hot spare is an unused, online, available drive that is ready to replace a failed disk. The hot spare is one of the most important features that RAID controllers provide to deliver a high degree of fault tolerance. A hot spare is a spare physical drive that has been marked as a hot spare and is therefore not a member of any RAID set. If a disk drive used in a volume set fails, the hot spare will automatically take its place and the data previously
located on the failed drive is reconstructed on the hot spare. A dedicated hot spare is assigned to serve one specified RAID set, while a global hot spare is assigned to serve all RAID sets on the RAID controller. A dedicated hot spare has higher priority than a global hot spare. For this feature to work properly, the hot spare must have at least the same capacity as the drive it replaces. The hot spare function only works with RAID level 1, 1E, 3, 5, 6, 10, 30, 50, or 60 volume sets. The Create Hot Spare option gives you the ability to define a global or dedicated hot spare disk drive. To effectively use the hot spare feature, you must always maintain at least one drive that is marked as a global hot spare. Important: The hot spare must have at least the same capacity as the drive it replaces.
The RAID controller chip includes a protection circuit that supports the replacement of SAS/SATA hard disk drives without having to shut down or reboot the system. A removable hard drive tray can deliver a hot-swappable, fault-tolerant RAID solution. This feature provides advanced fault-tolerant RAID protection and online drive replacement.
If a disk drive is brought online into a system operating in degraded mode, the RAID controllers will automatically declare the new disk as a spare and begin rebuilding the degraded volume. The Auto Declare Hot-Spare function requires that the new disk be at least as large as the smallest drive contained within the volume set in which the failure occurred. If the system is in normal status, the newly installed drive will be configured as an online free disk. However, the newly installed drive is automatically assigned as a hot spare if a hot spare disk was previously used
to rebuild an array and has not yet been replaced by a new drive. In this case, the Auto Declare Hot-Spare status will disappear if the RAID subsystem is subsequently powered off and on. The hot-swap function can be used to rebuild disk drives in arrays with data redundancy, such as RAID level 1, 1E, 3, 5, 6, 10, 30, 50 and 60.
Auto Rebuilding
If a hot spare is available, the rebuild starts automatically when a drive fails. The RAID controllers automatically and transparently rebuild failed drives in the background at user-definable rebuild rates. If a hot spare is not available, the failed disk drive must be replaced with a new disk drive so that the data on the failed drive can be automatically rebuilt and fault tolerance can be maintained. If the system is shut down or powered off abnormally during a reconstruction procedure, the RAID controllers will automatically restart the rebuilding process when the system is powered on again. When a disk is hot swapped, although the system is functionally operational, the system may no longer be fault tolerant. Fault tolerance will be lost until the removed drive is replaced and the rebuild operation is completed. During the automatic rebuild process, system activity continues as normal; however, system performance and fault tolerance will be affected.
Rebuilding a degraded volume incurs a load on the RAID subsystem. The RAID controllers allow the user to select the rebuild priority to balance volume access and rebuild tasks appropriately. The Background Task Priority is a relative indication of how much time the controller devotes to a background operation, such as rebuilding or migrating.
The RAID controller allows the user to choose the task priority (Ultra Low (5%), Low (20%), Medium (50%), or High (80%)) to balance volume set access and background tasks appropriately. For high array performance, specify an Ultra Low value. As with volume initialization, a volume rebuild does not require a system reboot once it completes.
High Reliability
In an effort to help users avoid data loss, disk manufacturers now incorporate logic into their drives that acts as an "early warning system" for pending drive problems. This system is called SMART (Self-Monitoring, Analysis and Reporting Technology). The disk's integrated controller works with multiple sensors to monitor various aspects of the drive's performance, determines from this information whether the drive is behaving normally, and makes status information available to the 6Gb/s SAS RAID controller firmware, which probes the drive and reads it. SMART can often predict a problem before failure occurs. The controllers will recognize a SMART error code and notify the administrator of an impending hard drive failure.
Under normal operation, even initially defect-free drive media can develop defects. This is a common phenomenon. The bit density and rotational speed of disks increase every year, and so does the potential for problems. Usually a drive can internally remap bad sectors without external help, using the cyclic redundancy check (CRC) checksums stored at the end of each sector. The drives attached to the RAID controller perform automatic defect reassignment for both read and write errors. Writes are always completed: if a location to be written is found to be defective, the drive will automatically relocate that write command to a new location and map out the defective location. If there is a recoverable read error, the correct data will be transferred to the host and that location will be tested by the drive to be certain the location is not
defective. If it is found to have a defect, the data will be automatically relocated and the defective location mapped out to prevent future write attempts. In the event of an unrecoverable read error, the error will be reported to the host and the location will be flagged as potentially defective. A subsequent write to that location will initiate a sector test and relocation should that location prove to have a defect. Auto Reassign Sector does not affect disk subsystem performance because it runs as a background task; it is suspended whenever the operating system makes a request.
Consistency Check
A consistency check is a process that verifies the integrity of redundant data. To verify RAID 3, 5, 6, 30, 50 or 60 redundancy, a consistency check reads all associated data blocks, computes parity, reads the stored parity, and verifies that the computed parity matches the stored parity. Consistency checks are very important because they detect and correct parity errors and bad disk blocks in the drives. A consistency check forces every block on a volume to be read, and any bad blocks are marked; those blocks are not used again. This is critical because a bad disk block can prevent a disk rebuild from completing. We strongly recommend that you run consistency checks on a regular basis, at least once per week. Note that consistency checks degrade performance, so you should run them when the system load can tolerate it.
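As a minimal sketch of the idea rather than the controller's actual firmware logic (the block values and helper names below are invented for illustration), the following Python fragment verifies XOR parity over one stripe the way a RAID 3/5 consistency check conceptually does:

def xor_blocks(blocks):
    # XOR the data blocks together byte by byte (all blocks are the same length).
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def consistency_check(data_blocks, stored_parity):
    # Recompute parity from the data blocks and compare it with the parity on disk.
    return xor_blocks(data_blocks) == stored_parity

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = xor_blocks(data)                   # what the controller would have written
print(consistency_check(data, parity))      # True when the stripe is consistent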
Data Protection
Battery Backup
The RAID controllers are armed with a Battery Backup Module (BBM). While an Uninterruptible Power Supply (UPS) protects most servers from power fluctuations or failures, a BBM provides an additional level of protection. In the event of a power failure, the BBM supplies power to retain data in the RAID controller's cache, thereby permitting any potentially dirty data in the cache to be flushed out to secondary storage when power is restored.
The batteries in the BBM are recharged continuously through a trickle-charging process whenever the system power is on. The batteries protect data in a failed server for up to three or four days, depending on the size of the memory module. Under normal operating conditions, the batteries last for three years before replacement is necessary.
Recovery ROM
RAID controller firmware is stored in flash ROM and is executed by the I/O processor. The firmware can also be updated through the RAID controller's PCIe 2.0 bus port or Ethernet port without the need to replace any hardware chips. During the controller firmware flash upgrade process, it is possible for a problem to occur that results in corruption of the controller firmware. With our Redundant Flash Image feature, the controller will revert to the last known good firmware image and continue operating. This reduces the risk of system failure due to a firmware crash.
Appendix F
Understanding RAID
RAID is an acronym for Redundant Array of Independent Disks. It is an array of multiple independent hard disk drives that provides high performance and fault tolerance. The RAID controller implements several levels of the Berkeley RAID technology. An appropriate RAID level is selected when the volume sets are defined or created. This decision should be based on the desired disk capacity, data availability (fault tolerance or redundancy), and disk performance. The following section discusses the RAID levels supported by the RAID controllers. The RAID controller makes the RAID implementation and the disks' physical configuration transparent to the host operating system. This means that the host operating system drivers and software utilities are not affected, regardless of the RAID level selected. Correct installation of the disk array and the controller requires a proper understanding of RAID technology and of these concepts.
RAID 0
RAID 0, also referred to as striping, writes stripes of data across multiple disk drives instead of just one disk drive. RAID 0 does not provide any data redundancy, but offers the best high-speed data throughput. RAID 0 breaks data up into smaller blocks and then writes a block to each drive in the array. Disk striping enhances performance because multiple drives are accessed simultaneously; however, the reliability of RAID level 0 is lower because the entire array fails if any one disk drive fails.
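As a simplified sketch of how striping places data (the helper name and four-drive layout below are assumptions for this example only, not the controller's internal algorithm), logical block N of a RAID 0 array can be thought of as landing on drive N mod D at stripe N div D:

def raid0_map(logical_block, num_drives):
    # Blocks rotate across the drives; each full pass over the drives is one stripe.
    drive = logical_block % num_drives
    stripe = logical_block // num_drives
    return drive, stripe

# With four drives, blocks 0-3 land on drives 0-3 in stripe 0,
# and blocks 4-7 land on drives 0-3 in stripe 1.
for block in range(8):
    print(block, raid0_map(block, num_drives=4))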
RAID 1
RAID 1 is also known as disk mirroring; data written on one disk drive is simultaneously written to another disk drive. Read performance will be enhanced if the array controller can, in parallel, access both members of a mirrored pair. During writes, there will be a minor performance penalty when compared to writing to a single disk. If one drive fails, all data (and software applications) are preserved on the other drive. RAID 1 offers extremely high data reliability, but at the cost of doubling the required data storage capacity.
RAID 10(1E)
RAID 10(1E) is a combination of RAID 0 and RAID 1, combining striping with disk mirroring. RAID level 10 combines the fast performance of level 0 with the data redundancy of level 1. In this configuration, data is distributed across several disk drives, similar to level 0, and is then duplicated to another set of drives for data protection. RAID 10 has traditionally been implemented using an even number of disks, but some hybrids can use an odd number of disks as well. The illustration is an example of a hybrid RAID 10(1E) array comprising five disks: A, B, C, D and E. In this configuration, each stripe is mirrored on an adjacent disk with wrap-around. Areca RAID 10 offers a little more flexibility in choosing the number of disks that can be used to constitute an array; the number can be even or odd.
RAID 3
RAID 3 provides disk striping and complete data redundancy through a dedicated parity drive. RAID 3 breaks data up into smaller blocks, calculates parity by performing an exclusive-or on the blocks, and then writes the blocks to all but one drive in the array. The parity data created during the exclusive-or is then written to the last drive in the array. If a single drive fails, data is still available by computing the exclusive-or of the contents of the corresponding stripes on the surviving member disks. RAID 3 is best for applications that require very fast data-transfer rates or long data blocks.
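To make the exclusive-or recovery concrete, the following minimal Python sketch rebuilds one missing block from the surviving blocks plus parity; the block values and helper name are invented for illustration and do not reflect the controller's internal implementation:

def xor_blocks(blocks):
    # XOR the blocks together byte by byte (all blocks are the same length).
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # blocks on the data drives
parity = xor_blocks(data)                        # block on the dedicated parity drive
lost = data[1]                                   # pretend the second data drive failed
survivors = [data[0], data[2], parity]
print(xor_blocks(survivors) == lost)             # True: the lost block is recovered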
RAID 5
RAID 5 is sometimes called striping with distributed parity at the block level. In RAID 5, the parity information is written across all of the drives in the array rather than being concentrated on a dedicated parity disk. If one drive in the system fails, the parity information can be used to reconstruct the data from that drive. All drives in the array can be used for seek operations at the same time, greatly increasing the performance of the RAID system. This relieves the write bottleneck that characterizes RAID 4 and is the primary reason that RAID 5 is more often implemented in RAID arrays.
RAID 6
RAID 6 provides the highest reliability. It is similar to RAID 5, but it performs two different parity computations or the same computation on overlapping subsets of the data. RAID 6 can offer fault tolerance greater than RAID 1 or RAID 5 but only consumes the capacity of 2 disk drives for distributed parity data. RAID 6 is an extension of RAID 5 but uses a second, independent distributed parity scheme. Data is striped on a block level across a set of drives, and then a second set of parity is calculated and written across all of the drives.
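As a rough worked example of the trade-off (drive sizes chosen only for illustration): with eight 2 TB drives, RAID 5 yields (8 - 1) x 2 TB = 14 TB of usable capacity and survives one drive failure, while RAID 6 yields (8 - 2) x 2 TB = 12 TB and survives any two simultaneous drive failures.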
RAID x0
RAID level x0 refers to RAID levels 00, 100, 30, 50 and 60. RAID x0 is a combination of multiple RAID x volume sets with RAID 0 (striping). Striping helps to increase capacity and performance without adding disks to each RAID x array. The operating system uses the spanned volume in the same way as a regular volume. Up to one drive in each sub-volume (RAID 3 or 5) may fail without loss of data, and up to two drives in each sub-volume (RAID 6) may fail without loss of data. RAID level x0 allows more physical drives in an array; the benefits of doing so are larger volume sets, increased performance, and increased reliability. The following illustration is an example of a RAID level x0 logical drive.
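As a rough worked example (the drive sizes here are only illustrative): a RAID 50 volume built from two RAID 5 sub-volumes of four 2 TB drives each provides 2 x (4 - 1) x 2 TB = 12 TB of usable capacity, stripes I/O across both sub-volumes for added performance, and can tolerate one failed drive in each sub-volume at the same time.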
Important: RAID levels 00, 100, 30, 50 and 60 can support up to eight RAID sets. If a volume is RAID level 00, 100, 30, 50, or 60, you cannot change the volume to another RAID level. If a volume is RAID level 0, 1, 10(1E), 3, 5, or 6, you cannot change the volume to RAID level 00, 100, 30, 50, or 60.
JBOD
JBOD (Just a Bunch Of Disks) refers to a group of hard disks in a RAID box that are not set up in any type of RAID configuration. Each drive is available to the operating system as an individual disk. JBOD does not provide data redundancy.
Summary of RAID Levels
The 6Gb/s SAS RAID controller supports RAID levels 0, 1, 10(1E), 3, 5, 6, 30, 50 and 60. The following table provides a summary of these RAID levels.
RAID Level Comparison
RAID 0 - Minimum disks: 1. Data availability: no data protection. Also known as striping. Data is distributed across multiple drives in the array; there is no data protection.

RAID 1 - Minimum disks: 2. Data availability: up to one disk failure. Also known as mirroring. All data is replicated on two separate disks (N is almost always 2). Because this is 100% duplication, it is a comparatively costly solution.

RAID 10(1E) - Minimum disks: 3. Data availability: up to one disk failure in each mirrored set. Also known as mirroring and striping. Data is written to two disks simultaneously, and an odd number of disks is allowed. Read requests can be satisfied by data read from either one disk or both disks.

RAID 3 - Minimum disks: 3. Data availability: up to one disk failure. Also known as Bit-Interleaved Parity. Data and parity information are subdivided and distributed across all data disks. Parity information is normally stored on a dedicated parity disk.

RAID 5 - Minimum disks: 3. Data availability: up to one disk failure. Also known as Block-Interleaved Distributed Parity. Data and parity information are subdivided and distributed across all disks. Parity information is normally interspersed with user data.

RAID 6 - Minimum disks: 4. Data availability: up to two disk failures. RAID 6 provides the highest reliability, but is not widely used. It is similar to RAID 5, but performs two different parity computations or the same computation on overlapping subsets of the data. RAID 6 offers fault tolerance greater than RAID 1 or RAID 5 but consumes only the capacity of two disk drives for distributed parity data.

RAID 30 - Minimum disks: 6. Data availability: up to one disk failure in each sub-volume. RAID 30 is a combination of multiple RAID 3 volume sets with RAID 0 (striping).

RAID 50 - Minimum disks: 6. Data availability: up to one disk failure in each sub-volume. RAID 50 is a combination of multiple RAID 5 volume sets with RAID 0 (striping).

RAID 60 - Minimum disks: 8. Data availability: up to two disk failures in each sub-volume. RAID 60 is a combination of multiple RAID 6 volume sets with RAID 0 (striping).
Version History
Revision   Page     Description
1.1        p.152    Added SAS Chip Information function
1.1        p.142    Added Advanced Configuration function
1.1        p.146    Added SATA Power Up in Standby
1.1        p.121    Added "Create Raid Set" Context