Chapter 10: Mass-Storage Systems (Silberschatz, Galvin and Gagne, Operating System Concepts, 9th Edition, ©2013)
Overview of Mass Storage Structure
Disk Structure
Disk Attachment
Disk Scheduling
Disk Management
Swap-Space Management
RAID Structure
Stable-Storage Implementation
Objectives
To describe the physical structure of secondary storage devices and its effects on the uses of the devices
To explain the performance characteristics of mass-storage devices
To evaluate disk scheduling algorithms
To discuss operating-system services provided for mass storage, including RAID
Overview of Mass Storage Structure

Drives rotate at 60 to 250 times per second.
Transfer rate is the rate at which data flow between drive and computer.
Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency).
A head crash results from the disk head making contact with the disk surface.
That's bad!
Buses vary, including EIDE, ATA, SATA, USB, Fibre Channel, SCSI, SAS, and FireWire.
The host controller in the computer uses the bus to talk to the disk controller built into the drive or storage array.
Magnetic Disks
Transfer rate: theoretical 6 Gb/sec; effective (real) rate about 1 Gb/sec.
Seek time: 3 ms to 12 ms; 9 ms is common for desktop drives.
Latency based on spindle speed: 1 / (RPM / 60) = 60 / RPM seconds per rotation; average rotational latency is half that.
Access latency = average access time = average seek time + average rotational latency.
For the fastest disk: 3 ms + 2 ms = 5 ms. For a slow disk: 9 ms + 5.56 ms = 14.56 ms.
Average I/O time = average access time + (amount to transfer / transfer rate) + controller overhead.
For example, to transfer a 4 KB block on a 7200 RPM disk with a 5 ms average seek time, a 1 Gb/sec transfer rate, and 0.1 ms controller overhead:
5 ms (seek) + 4.17 ms (average rotational latency at 7200 RPM) + 0.031 ms (transfer of 4 KB at 1 Gb/sec) + 0.1 ms (controller) ≈ 9.30 ms.
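A quick sanity check of that arithmetic, as a minimal C sketch (the constants are just the ones from the example above):

#include <stdio.h>

int main(void) {
    double seek_ms    = 5.0;                     /* average seek time */
    double rpm        = 7200.0;
    double latency_ms = 60.0 / rpm / 2 * 1000;   /* half a rotation: ~4.17 ms */
    double xfer_ms    = 4096.0 * 8 / 1e9 * 1000; /* 4 KB at 1 Gb/sec: ~0.03 ms */
    double ctrl_ms    = 0.1;                     /* controller overhead */

    /* average I/O time = seek + rotational latency + transfer + overhead */
    printf("average I/O time = %.2f ms\n",
           seek_ms + latency_ms + xfer_ms + ctrl_ms);   /* prints ~9.30 */
    return 0;
}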
Solid-State Disks
Can be more reliable than HDDs.
More expensive per MB.
May have a shorter life span.
Less capacity, but much faster.
Buses can be too slow, so some SSDs connect directly to PCI, for example.
No moving parts, so no seek time or rotational latency.
Magnetic Tape
Relatively permanent and holds large quantities of data.
Access time is slow; random access is about 1000 times slower than disk.
Mainly used for backup, storage of infrequently used data, and as a transfer medium between systems.
Kept in a spool and wound or rewound past a read-write head.
Once data are under the head, transfer rates are comparable to disk.
Typical storage: 200 GB to 1.5 TB.
Common technologies are LTO-{3,4,5} and T10000.
Disk Structure
Disk drives are addressed as large one-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer.
The one-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially, except for bad sectors.
The number of sectors per track is non-constant on drives that use constant angular velocity.
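As a rough illustration of the sequential mapping (ignoring bad-sector sparing and zoned recording, and using a made-up fixed geometry), a logical block number can be turned into a cylinder/head/sector triple like this:

/* Minimal sketch: logical block number -> (cylinder, head, sector),
 * assuming a fixed, uniform geometry. Real drives hide this mapping
 * behind the controller and use zoned recording plus spare sectors. */
#include <stdio.h>

#define SECTORS_PER_TRACK 63   /* hypothetical geometry */
#define HEADS_PER_CYL     16

void lba_to_chs(unsigned lba, unsigned *cyl, unsigned *head, unsigned *sect) {
    *cyl  =  lba / (SECTORS_PER_TRACK * HEADS_PER_CYL);
    *head = (lba / SECTORS_PER_TRACK) % HEADS_PER_CYL;
    *sect =  lba % SECTORS_PER_TRACK;
}

int main(void) {
    unsigned c, h, s;
    lba_to_chs(123456, &c, &h, &s);
    printf("LBA 123456 -> cylinder %u, head %u, sector %u\n", c, h, s);
    return 0;
}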
Disk Attachment
Host-attached storage is accessed through I/O ports talking to I/O buses.
SCSI itself is a bus, with up to 16 devices on one cable; a SCSI initiator requests operations and SCSI targets perform the tasks.
Each target can have up to 8 logical units (disks attached to the device controller).
Fibre Channel (FC) is a high-speed serial architecture. It can be a switched fabric with a 24-bit address space, the basis of storage area networks (SANs), in which many hosts attach to many storage units.
Storage Array
Can just attach disks, or arrays of disks.
A storage array has controller(s) and provides features to attached host(s):
Ports to connect hosts to the array
Memory and controlling software (sometimes NVRAM, etc.)
A few to thousands of disks
RAID, hot spares, hot swap (discussed later)
Shared storage -> more efficiency
Features found in some file systems
Storage Area Network

Common in large storage environments.
Multiple hosts attached to multiple storage arrays - flexible.
Hosts also attach to the switches.
Storage is made available via LUN masking from specific arrays to specific servers.
Easy to add or remove storage, or to add a new host and allocate it storage.
Network-Attached Storage
Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus)
Disk Scheduling
The operating system is responsible for using the hardware efficiently; for disk drives, this means having fast access time and high disk bandwidth.
Minimize seek time; seek time ≈ seek distance.
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
An I/O request includes input or output mode, a disk address, a memory address, and the number of sectors to transfer.
The OS maintains a queue of requests, per disk or device.
An idle disk can immediately work on an I/O request; on a busy disk, work must queue.
Note that drive controllers have small buffers and can manage a queue of I/O requests (of varying depth).
Several algorithms exist to schedule the servicing of disk I/O requests. We illustrate them with a request queue of cylinders (0-199): 98, 183, 37, 122, 14, 124, 65, 67, with the head pointer starting at cylinder 53.
The analysis is true for one or many platters.
FCFS
Requests are serviced in arrival order; the illustration shows total head movement of 640 cylinders.
SSTF
Shortest Seek Time First (SSTF) selects the request with the minimum seek time from the current head position.
SSTF scheduling is a form of SJF scheduling and may cause starvation of some requests.
The illustration shows total head movement of 236 cylinders; a simulation sketch follows.
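As promised above, a small C sketch that replays the request queue (98, 183, 37, 122, 14, 124, 65, 67; head at 53) under FCFS and SSTF and reports the total head movement:

#include <stdio.h>
#include <stdlib.h>

#define N 8
static int q[N] = {98, 183, 37, 122, 14, 124, 65, 67};

int fcfs(int head) {                       /* serve in arrival order */
    int total = 0;
    for (int i = 0; i < N; i++) { total += abs(q[i] - head); head = q[i]; }
    return total;
}

int sstf(int head) {                       /* always pick the closest request */
    int done[N] = {0}, total = 0;
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (best < 0 || abs(q[i] - head) < abs(q[best] - head)))
                best = i;
        total += abs(q[best] - head);
        head = q[best];
        done[best] = 1;
    }
    return total;
}

int main(void) {
    printf("FCFS: %d cylinders\n", fcfs(53));   /* 640, matching the slide */
    printf("SSTF: %d cylinders\n", sstf(53));   /* 236, matching the slide */
    return 0;
}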
SCAN
The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues.
Sometimes called the elevator algorithm.
C-SCAN
Provides a more uniform wait time than SCAN.
The head moves from one end of the disk to the other, servicing requests as it goes.
When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip
Treats the cylinders as a circular list that wraps around from the last cylinder to the first one
C-LOOK
LOOK is a version of SCAN; C-LOOK is a version of C-SCAN.
The arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
Total number of cylinders? (Worked out in the sketch below.)
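One way to answer that question: replay the same queue under C-LOOK, starting at 53 and moving toward higher cylinder numbers first, counting the long return seek as head movement:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* requests above and below the starting head position, each sorted */
    int up[]   = {65, 67, 98, 122, 124, 183};
    int down[] = {14, 37};
    int head = 53, total = 0;

    for (int i = 0; i < 6; i++) { total += abs(up[i] - head); head = up[i]; }
    /* jump back to the lowest pending request, then continue upward */
    for (int i = 0; i < 2; i++) { total += abs(down[i] - head); head = down[i]; }

    printf("C-LOOK: %d cylinders\n", total);   /* 130 + 169 + 23 = 322 */
    return 0;
}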
Selecting a Disk-Scheduling Algorithm

SSTF is common and has a natural appeal.
SCAN and C-SCAN perform better for systems that place a heavy load on the disk:
Less starvation
Performance depends on the number and types of requests.
Requests for disk service can be influenced by the file-allocation method.
The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary.
Either SSTF or LOOK is a reasonable choice for the default algorithm.
What about rotational latency?
Disk Management
Low-level formatting, or physical formatting: dividing a disk into sectors that the disk controller can read and write.
Each sector can hold header information, plus data, plus error correction code (ECC); a toy sketch of the layout follows.
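A toy picture of the per-sector layout in C; the "ECC" here is a simple checksum stand-in, since real ECC (e.g., Reed-Solomon) is computed by the disk controller, not by software like this:

#include <stdio.h>

struct sector {
    unsigned cylinder, head, number;   /* header: identifies the sector */
    unsigned char data[512];           /* payload */
    unsigned ecc;                      /* error-detecting code over data */
};

unsigned toy_ecc(const unsigned char *d, size_t n) {
    unsigned sum = 0;
    while (n--) sum += *d++;           /* stand-in for real controller ECC */
    return sum;
}

int main(void) {
    struct sector s = { 12, 3, 42, {0}, 0 };
    s.ecc = toy_ecc(s.data, sizeof s.data);
    printf("sector %u/%u/%u, ecc = %u\n", s.cylinder, s.head, s.number, s.ecc);
    return 0;
}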
To use a disk to hold files, the operating system still needs to record its own data structures on the disk: it partitions the disk into one or more groups of cylinders, each treated as a logical disk, and then performs logical formatting ("making a file system").
Raw disk access is provided for apps that want to do their own block management and keep the OS out of the way (databases, for example).
The boot block initializes the system: the bootstrap is stored in ROM, and the bootstrap loader program is stored in the boot blocks of the boot partition.
Swap-Space Management
Swap space can be carved out of the normal file system or, more commonly, it can be in a separate (raw) disk partition.
Swap-space management:
4.3BSD allocates swap space when a process starts; it holds the text segment (the program) and the data segment.
The kernel uses swap maps to track swap-space use.
Solaris 2 allocates swap space only when a dirty page is forced out of physical memory, not when the virtual-memory page is first created:
File data are written to swap space until a write to the file system is requested.
Other dirty pages go to swap space because they have no other home.
Text-segment pages are thrown out and reread from the file system as needed.
What if a system runs out of swap space? Some systems allow multiple swap spaces. A toy swap-map sketch follows.
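The swap maps mentioned above can be imagined as something as simple as a per-slot use counter; this sketch is illustrative only, not the actual 4.3BSD or Solaris structure:

#include <stdio.h>

#define NSLOTS 1024
static unsigned short swap_map[NSLOTS];   /* 0 = free; else reference count */

int swap_alloc(void) {                    /* returns a slot index, or -1 */
    for (int i = 0; i < NSLOTS; i++)
        if (swap_map[i] == 0) { swap_map[i] = 1; return i; }
    return -1;                            /* out of swap space */
}

void swap_free(int slot) {
    if (slot >= 0 && swap_map[slot] > 0)
        swap_map[slot]--;                 /* slot is free again at 0 */
}

int main(void) {
    int s = swap_alloc();
    printf("allocated swap slot %d\n", s);
    swap_free(s);
    return 0;
}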
RAID Structure
Mirroring increases reliability: if mirrored disks fail independently, consider a disk with a 100,000-hour mean time to failure and a 10-hour mean time to repair. The mean time to data loss is 100,000² / (2 × 10) = 500 × 10⁶ hours, or about 57,000 years! (A quick check of this arithmetic follows below.)
Frequently combined with NVRAM to improve write performance.
RAID is arranged into six different levels.
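To double-check the mean-time-to-data-loss figure above, here is the formula MTTF² / (2 × MTTR) evaluated directly:

#include <stdio.h>

int main(void) {
    double mttf = 100000.0;   /* mean time to failure, hours */
    double mttr = 10.0;       /* mean time to repair, hours  */
    /* mirrored pair with independent failures */
    double mttdl = mttf * mttf / (2.0 * mttr);
    printf("MTTDL = %.0f hours (~%.0f years)\n", mttdl, mttdl / (24 * 365));
    return 0;                 /* 500,000,000 hours, ~57,000 years */
}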
RAID (Cont.)
Several improvements in disk-use techniques involve the use of multiple disks working cooperatively.
Disk striping uses a group of disks as one storage unit.
RAID schemes improve performance and improve the reliability of the storage system by storing redundant data:
Mirroring or shadowing (RAID 1) keeps a duplicate of each disk.
Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provide high performance and high reliability.
Block interleaved parity (RAID 4, 5, 6) uses much less redundancy; the XOR sketch below shows the idea.
RAID within a storage array can still fail if the array fails, so automatic replication of the data between arrays is common.
Frequently, a small number of hot-spare disks are left unallocated, automatically replacing a failed disk and having data rebuilt onto them.
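The XOR sketch referenced above: with block-interleaved parity, the parity block is the XOR of the data blocks, so any single lost block can be rebuilt as the XOR of all the others:

#include <stdio.h>
#include <string.h>

#define BLK 8   /* toy block size */

int main(void) {
    unsigned char d0[BLK] = "AAAAAAA", d1[BLK] = "BBBBBBB", d2[BLK] = "CCCCCCC";
    unsigned char parity[BLK], rebuilt[BLK];

    for (int i = 0; i < BLK; i++)          /* write path: compute parity */
        parity[i] = d0[i] ^ d1[i] ^ d2[i];

    for (int i = 0; i < BLK; i++)          /* disk holding d1 fails: rebuild */
        rebuilt[i] = d0[i] ^ d2[i] ^ parity[i];

    printf("rebuilt block matches: %s\n",
           memcmp(rebuilt, d1, BLK) == 0 ? "yes" : "no");
    return 0;
}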
RAID Levels
RAID (0 + 1) and (1 + 0)
Other Features
Regardless of where RAID is implemented, other useful features can be added.
A snapshot is a view of the file system before a set of changes takes place (i.e., at a point in time).
More in Ch 12
A hot-spare disk is unused; it is automatically used by the RAID array if a disk fails, to replace the failed disk and rebuild the RAID set if possible.
Extensions
RAID alone does not prevent or detect data corruption or other errors, just disk failures.
Solaris ZFS adds checksums of all data and metadata.
Checksums are kept with the pointer to the object, to detect if the object is the right one and whether it changed (a sketch of the idea follows).
It can detect and correct data and metadata corruption.
ZFS also does away with volumes and partitions:
Disks are allocated in pools.
Filesystems within a pool share that pool, using and releasing space like malloc() and free() memory allocation/release calls.
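A sketch of the checksum-with-the-pointer idea; the structure and the djb2 hash here are illustrative stand-ins (ZFS actually uses fletcher or SHA-256 checksums in its block pointers):

#include <stdio.h>

struct blkptr {
    const unsigned char *block;   /* pointer to the object */
    unsigned long        cksum;   /* checksum of the pointed-to object */
};

unsigned long checksum(const unsigned char *p, size_t n) {
    unsigned long h = 5381;       /* djb2, a stand-in for fletcher/SHA-256 */
    while (n--) h = h * 33 + *p++;
    return h;
}

int verified_read(const struct blkptr *bp, size_t n) {
    return checksum(bp->block, n) == bp->cksum;   /* 1 = right, intact block */
}

int main(void) {
    unsigned char data[16] = "file contents";
    struct blkptr bp = { data, checksum(data, sizeof data) };
    data[3] ^= 0x40;                              /* simulate corruption */
    printf("block is %s\n", verified_read(&bp, sizeof data) ? "intact" : "corrupt");
    return 0;
}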
Stable-Storage Implementation
A write-ahead log scheme requires stable storage.
Stable storage means data is never lost (due to failure, etc.).
To implement stable storage:
Replicate information on more than one nonvolatile storage medium with independent failure modes.
Update information in a controlled manner to ensure that we can recover the stable data after any failure during data transfer or recovery.
A disk write has one of three outcomes:
1. Successful completion - the data were written correctly on disk.
2. Partial failure - a failure occurred in the midst of transfer, so only some of the sectors were written with the new data, and the sector being written during the failure may have been corrupted.
3. Total failure - the failure occurred before the disk write started, so the previous data values on the disk remain intact.
If a failure occurs during a block write, the recovery procedure restores the block to a consistent state.
The system maintains 2 physical blocks per logical block and does the following:
1. Write the information to the first physical block.
2. When that write completes successfully, write the same information to the second physical block.
3. Declare the operation complete only after the second write completes successfully.
(A sketch of this protocol follows.)
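A sketch of that two-block protocol using ordinary files as the two "physical blocks" (the file names are made up; a real implementation would also need to force the data to the medium, e.g. with fsync, and compare the two copies during crash recovery):

#include <stdio.h>

/* Write one physical copy; return 0 only if every byte was written. */
int write_block(const char *path, const void *buf, size_t n) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t w = fwrite(buf, 1, n, f);
    int bad = (fflush(f) != 0) || (w != n);
    fclose(f);
    return bad ? -1 : 0;
}

int stable_write(const void *buf, size_t n) {
    if (write_block("copy1.blk", buf, n) != 0) return -1;  /* step 1 */
    if (write_block("copy2.blk", buf, n) != 0) return -1;  /* step 2 */
    return 0;   /* step 3: complete only after both copies succeed */
}

int main(void) {
    char data[] = "logical block contents";
    printf("stable write %s\n", stable_write(data, sizeof data) == 0 ? "ok" : "failed");
    return 0;
}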
End of Chapter 10