How Basic and Dynamic Disks and Volumes Work


What Are Basic Disks and Volumes?

Updated: March 28, 2003

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

In this section

Basic Disk and Volume Scenarios

Basic disks and basic volumes are the storage types most often used with Windows operating systems. The term basic disk refers to a disk that contains basic volumes, such as primary partitions and logical drives. The term basic volume refers to a partition on a basic disk. Basic disks, which are found in both x86-based and Itanium-based computers, provide a simple, elegant storage solution that can accommodate changing storage requirements. Basic disks support clustered disks, Institute of Electrical and Electronics Engineers (IEEE) 1394 disks, and universal serial bus (USB) removable drives. In x86-based computers running Windows Server 2003, basic disks use the same Master Boot Record (MBR) partition style as the disks used by Microsoft MS-DOS, and all previous versions of Microsoft Windows. Itanium-based computers also support basic disks, but you can choose from two partition styles (MBR or GPT) for each basic disk. You can create up to 128 volumes on an MBR or GPT disk. The partition style determines the operating systems that can access the disk. Before you can create simple volumes, spanned volumes, or volumes that use redundant array of independent disks (RAID) technology (striped volumes, mirrored volumes, and RAID-5 volumes) you must convert a basic disk to a dynamic disk. Windows Server 2003 supports the following types of basic volumes:

Primary partitions (master boot record (MBR) and GUID partition table (GPT) disks)
Logical drives within extended partitions (MBR disks only)

The number of basic volumes you can create on a basic disk depends on the partition style of the disk:

On MBR disks, you can create up to four primary partitions, or you can create up to three primary partitions and one extended partition. Within the extended partition, you can create up to 128 logical drives. On GPT disks, you can create up to 128 partitions. Because GPT disks do not limit you to four partitions, extended partitions and logical drives are not available on GPT disks. If you want to add more space to existing primary partitions and logical drives, you can extend the volume using the extend command in DiskPart.

Basic Disk and Volume Scenarios


Basic disks and volumes can be scaled to match your storage needs. Basic disks and volumes are commonly used in the following scenarios.

Home or business desktop computer with one disk

Most home and business users require a basic disk and one basic volume for storage, and do not require a computer with volumes that span multiple disks or that provide fault-tolerance. This is the best choice for those who require simplicity and ease of use.

Home or business desktop computer with one disk and more than one volume
If a home or small business user wants to upgrade the operating system without losing their personal data, they should store the operating system in a separate location from their personal data. In this scenario, a basic disk with two or more basic volumes is required. The user can install the operating system on the first volume, creating a boot volume or system volume, and use the second volume to store data. When a new version of the operating system is released, the user can reformat the boot or system volume and install the new operating system. Their personal data, located on the second volume, remains untouched.

Business server with one disk and multiple volumes and logical drives
If a small business operates a file server and requires multiple volumes for file sharing and file security, the system administrator can create up to three primary partitions and one extended partition with up to 128 logical drives. In this scenario, each of the partitions and logical drives receives its own drive letter so that each of these volumes can be individually secured to limit access to specific, authorized users. For example, perhaps each department within this business requires its own volume. The business could create individual volumes and grant permissions to members of those departments. Data shared by members of human resources, for example, could be kept separate from the data used by members of the accounting, sales, or marketing departments. As storage needs and the importance of the data stored on disk increase, the logical next step for this business would be to add additional disks to the server, convert the disks to dynamic, and then create fault-tolerant mirrored or RAID-5 volumes.

How Basic Disks and Volumes Work

Updated: March 28, 2003

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

In this section

Basic Disks and Volumes Architecture
Basic Disk and Volume Interfaces
Basic Disks and Volumes Physical Structure
Disk Sectors on GPT Disks
Basic Disks and Volumes Processes and Interactions

Basic disks and basic volumes are the storage types most often used with Windows operating systems. The term basic disk refers to a disk that contains basic volumes, such as primary partitions and logical drives. The term basic volume refers to a partition on a basic disk. Basic disks, which are found in both x86-based and Itanium-based computers, provide a simple, elegant storage solution that can accommodate changing storage requirements. Basic disks support clustered disks, Institute of Electrical and Electronics Engineers (IEEE) 1394 disks, and universal serial bus (USB) removable drives. Before you can use a basic disk, it must have a disk signature and be formatted with either the File Allocation Table (FAT) or NT file system (NTFS) file systems. An optimal environment for basic disks and volumes is defined as follows:

Windows Server 2003 operating system is installed and functioning properly.
The basic disk is functioning properly and displays the Healthy status in the Disk Management snap-in.

The following sections provide an in-depth view of how basic disks and volumes work in an optimal environment.

Basic Disks and Volumes Architecture


Basic disks and volumes rely on the Logical Disk Manager (LDM) and Virtual Disk Service (VDS) and their associated components. These components enable you to perform tasks such as converting basic disks into dynamic disks, and creating fault-tolerant volumes. The following diagram shows the LDM and VDS components. Logical Disk Manager and Virtual Disk Service Components

The following table lists the LDM and VDS components and provides a brief description of each. Logical Disk Manager and Virtual Disk Service Components

Disk Management snap-in: Dmdlgs.dll, Dmdskmgr.dll, Dmview.ocx, Diskmgmt.msc
Binaries that comprise the Disk Management snap-in user interface.

DiskPart command line utility: Diskpart.exe
A scriptable alternative to the Disk Management snap-in.

Mount Manager command line: Mountvol.exe
A command line utility that can be used to create, delete, or list volume mount points.

Virtual Disk Service: Vds.exe, Vdsutil.dll
A program used to configure and maintain volume and disk storage.

Virtual Disk Service provider for basic disks and volumes: Vdsbas.dll
The Virtual Disk Service calls into the basic provider when configuring basic disks and volumes.

Virtual Disk Service provider for dynamic disks and volumes: Vdsdyndr.dll
The Virtual Disk Service calls into the dynamic provider when configuring dynamic disks and volumes.

Dmboot.sys, Dmconfig.dll, Dmintf.dll, Dmio.sys, Dmload.sys, Dmremote.exe, Dmutil.dll
Drivers and user mode components used to configure dynamic disks and volumes and perform I/O.

Logical Disk Manager Administrative Service: Dmadmin.exe
The VDS provider for dynamic disks and volumes uses the interfaces exposed by this service to configure dynamic disks.

Logical Disk Manager service: Dmserver.dll
A service that detects and monitors new hard disk drives and sends disk volume information to the Logical Disk Manager Administrative Service for configuration. If this service is stopped, dynamic disk status and configuration information might become outdated. If this service is disabled, any services that explicitly depend on it will fail to start.

Basic disk I/O driver: Ftdisk.sys
A driver that manages all I/O for basic disks. Other system components, such as the mount point manager, call into this driver to get information about basic disk volumes.

Mount point manager driver: Mountmgr.sys
A binary that tracks drive letters, folder mount paths, and other mount points for volumes. Assigns a unique volume mount point of the form \??\Volume{GUID} to each volume, in addition to any drive letters or folder paths that have been assigned by the user. Ensures that a volume gets the same drive letter each time the computer boots, and also tries to retain a volume's drive letter when the volume's disk is moved to a new computer.

Partition manager: Partmgr.sys
A filter driver that sits on top of the disk driver. All disk driver requests pass through the partition manager driver. This driver creates partition devices and notifies the volume managers of partition arrivals and removals. Exposes IOCTLs that return information about partitions to other components, and allow partition configuration.
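In user mode, the \??\Volume{GUID} names that the mount point manager assigns appear as \\?\Volume{GUID}\ paths. The following sketch, which is not part of the original reference, lists each volume name along with any drive letters or folder paths assigned to it, using the Win32 volume enumeration functions; error handling is abbreviated.

/* Sketch: enumerate volume GUID paths and their mount points. */
#include <windows.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    WCHAR volumeName[MAX_PATH];
    HANDLE hFind = FindFirstVolumeW(volumeName, MAX_PATH);

    if (hFind == INVALID_HANDLE_VALUE) {
        wprintf(L"FindFirstVolumeW failed: %lu\n", GetLastError());
        return 1;
    }

    do {
        WCHAR paths[4 * MAX_PATH];
        DWORD returned = 0;
        WCHAR *p;

        /* Volume name of the form \\?\Volume{GUID}\ */
        wprintf(L"%s\n", volumeName);

        /* Drive letters and mounted folder paths, returned as a multi-string. */
        if (GetVolumePathNamesForVolumeNameW(volumeName, paths,
                sizeof(paths) / sizeof(paths[0]), &returned)) {
            for (p = paths; *p != L'\0'; p += wcslen(p) + 1) {
                wprintf(L"    mounted at %s\n", p);
            }
        }
    } while (FindNextVolumeW(hFind, volumeName, MAX_PATH));

    FindVolumeClose(hFind);
    return 0;
}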

Basic Disk and Volume Interfaces


Basic disks and volumes can be managed using the Disk Management snap-in. The following table lists the control codes used by the Disk Management snap-in when managing disks and volumes. For more information about the interfaces used by Disk Management when managing disks and volumes, see Disk Management Reference on MSDN. Disk Management Control Codes

IOCTL_DISK_CREATE_DISK: Initializes the specified disk and disk partition table using the specified information.
IOCTL_DISK_DELETE_DRIVE_LAYOUT: Removes the boot signature from the master boot record.
IOCTL_DISK_GET_DRIVE_GEOMETRY_EX: Retrieves information about the physical disk's geometry.
IOCTL_DISK_GET_DRIVE_LAYOUT_EX: Retrieves information about the number of partitions on a disk and the features of each partition.
IOCTL_DISK_GET_LENGTH_INFO: Retrieves the length of the specified disk, volume, or partition.
IOCTL_DISK_GET_PARTITION_INFO_EX: Retrieves partition information for AT and EFI (Extensible Firmware Interface) partitions.
IOCTL_DISK_GROW_PARTITION: Enlarges the specified partition.
IOCTL_DISK_SET_DRIVE_LAYOUT_EX: Partitions a disk.
IOCTL_DISK_SET_PARTITION_INFO_EX: Sets the disk partition type.

For more information about the Disk Management control codes, see Disk Management Control Codes on MSDN.
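As an illustration of how these control codes can be used from user mode (a hedged sketch, not the Disk Management implementation), the following C program opens \\.\PhysicalDrive0, which is assumed to exist, and queries its layout with IOCTL_DISK_GET_DRIVE_LAYOUT_EX. It requires administrative rights.

/* Sketch: query a disk's partition layout with IOCTL_DISK_GET_DRIVE_LAYOUT_EX. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* DRIVE_LAYOUT_INFORMATION_EX is variable length; reserve room for 128 entries. */
    DWORD bufSize = sizeof(DRIVE_LAYOUT_INFORMATION_EX) +
                    127 * sizeof(PARTITION_INFORMATION_EX);
    DRIVE_LAYOUT_INFORMATION_EX *layout = (DRIVE_LAYOUT_INFORMATION_EX *)malloc(bufSize);
    DWORD bytes = 0, i;
    HANDLE hDisk;

    hDisk = CreateFileW(L"\\\\.\\PhysicalDrive0", GENERIC_READ,
                        FILE_SHARE_READ | FILE_SHARE_WRITE,
                        NULL, OPEN_EXISTING, 0, NULL);
    if (hDisk == INVALID_HANDLE_VALUE || layout == NULL) {
        printf("Could not open the disk: %lu\n", GetLastError());
        return 1;
    }

    if (DeviceIoControl(hDisk, IOCTL_DISK_GET_DRIVE_LAYOUT_EX,
                        NULL, 0, layout, bufSize, &bytes, NULL)) {
        printf("Partition style: %s, partition count: %lu\n",
               layout->PartitionStyle == PARTITION_STYLE_MBR ? "MBR" :
               layout->PartitionStyle == PARTITION_STYLE_GPT ? "GPT" : "RAW",
               layout->PartitionCount);

        for (i = 0; i < layout->PartitionCount; i++) {
            PARTITION_INFORMATION_EX *p = &layout->PartitionEntry[i];
            printf("  partition %lu: offset %I64d, length %I64d bytes\n",
                   p->PartitionNumber,
                   p->StartingOffset.QuadPart,
                   p->PartitionLength.QuadPart);
        }
    } else {
        printf("IOCTL_DISK_GET_DRIVE_LAYOUT_EX failed: %lu\n", GetLastError());
    }

    CloseHandle(hDisk);
    free(layout);
    return 0;
}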

Basic Disks and Volumes Physical Structure


Basic disks can use either the master boot record (MBR) or GUID partition table (GPT) partitioning style. x86-based computers use disks with the MBR partitioning style and Itanium-based computers use disks with the GPT partitioning style. The following figure compares a basic MBR disk to a basic GPT disk. Comparison of MBR and GPT Disks

A comparison of MBR and GPT disks is listed in the following table. Comparison of MBR and GPT Disks

Number of partitions on basic disks
MBR disk (x86-based computer): Supports up to four primary partitions per disk, or three primary partitions and an extended partition with up to 128 logical drives.
GPT disk (Itanium-based computer): Supports up to 128 partitions.

Compatible operating systems
MBR disk: Can be read by Microsoft MS-DOS; Microsoft Windows 95; Microsoft Windows 98; Microsoft Windows Millennium Edition; Windows NT, all versions; Windows 2000, all versions; Windows XP; and Windows Server 2003, all versions for x86-based computers and Itanium-based computers.
GPT disk: Can be read by Windows XP 64-Bit Edition, the 64-bit version of Windows Server 2003, Enterprise Edition, and the 64-bit version of Windows Server 2003, Datacenter Edition.

Maximum size of basic volumes
MBR disk: 2 terabytes.
GPT disk: 2 terabytes.

Partition tables (copies)
MBR disk: Contains one copy of the partition table.
GPT disk: Contains primary and backup partition tables for redundancy and checksum fields for improved partition structure integrity.

Locations for data storage
MBR disk: Stores data in partitions and in unpartitioned space. Although most data is stored within partitions, some data might be stored in hidden or unpartitioned sectors created by OEMs or other operating systems.
GPT disk: Stores user and program data in partitions that are visible to the user. Stores data that is critical to platform operation in partitions that the 64-bit versions of Windows Server 2003 recognize but do not make visible to the user. Does not store data in unpartitioned space.

Troubleshooting methods
MBR disk: Uses the same methods and tools used in Windows 2000.
GPT disk: Uses tools designed for GPT disks. (Do not use MBR troubleshooting tools on GPT disks.)

Master Boot Record on Basic Disks


The master boot record (MBR), the most important data structure on the disk, is created when the disk is partitioned. The MBR contains a small amount of executable code called the master boot code, the disk signature, and the partition table for the disk. At the end of the MBR is a 2-byte structure called a signature word or end of sector marker, which is always set to 0x55AA. A signature word also marks the end of an extended boot record (EBR) and the boot sector. The disk signature, a unique number at offset 0x01B8, identifies the disk to the operating system. Windows Server 2003 uses the disk signature as an index to store and retrieve disk information, such as drive letters, in the registry.

Master boot code

The master boot code performs the following activities:

1. Scans the partition table for the active partition.
2. Finds the starting sector of the active partition.
3. Loads a copy of the boot sector from the active partition into memory.
4. Transfers control to the executable code in the boot sector.

If the master boot code cannot complete these functions, the system displays a message similar to one of the following:

Invalid partition table.
Error loading operating system.
Missing operating system.

Note

Floppy disks and removable disks, such as Iomega Zip disks, do not contain an MBR. The first sector on these disks is the boot sector. Although every hard disk contains an MBR, the master boot code is used only if the disk contains the active primary partition.

Partition Table on Basic Disks

The partition table, which is a 64-byte data structure that is used to identify the type and location of partitions on a hard disk, conforms to a standard layout, independent of the operating system. Each partition table entry is 16 bytes long, with a maximum of four entries. Each entry starts at a predetermined offset from the beginning of the sector, as follows:

Partition 1: 0x01BE (446)
Partition 2: 0x01CE (462)
Partition 3: 0x01DE (478)
Partition 4: 0x01EE (494)

The following example shows a partial printout of an MBR revealing the partition table from a computer with three partitions. When there are fewer than four partitions on a disk, the remaining partition table fields are set to the value 0.
000001B0:                                           80 01
000001C0:  01 00 07 FE BF 09 3F 00 00 00 4B F5 7F 00 00 00   ......?...K.....
000001D0:  81 0A 07 FE FF FF 8A F5 7F 00 3D 26 9C 00 00 00   ..........=&....
000001E0:  C1 FF 05 FE FF FF C7 1B 1C 01 D6 96 92 00 00 00   ................
000001F0:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 AA   ..............U.

The following figure provides an example of how to interpret the sector printout of the partition table. The Boot Indicator, System ID, Relative Sectors, and Total Sectors values correspond to the values described in the table titled Partition Table Fields later in this section. Interpreting Data in the Partition Table

The following table describes the fields in each entry in the partition table. The Sample Values correspond to the first partition table entry shown in the previous example. The Byte Offset values correspond to the addresses of the first partition table entry. There are three additional entries whose values can be calculated by adding 10h to the byte offset value specific for each additional partition table entry (for example, add 20h for partition table entry 3 and 30h for partition table entry 4). The following table and sections following the table provide additional detail about these fields. Partition Table Fields

0x01BE (1 byte; sample value 80): Boot Indicator. Indicates whether the volume is the active partition. Legal values include 00 (do not use for booting) and 80 (active partition).
0x01BF (1 byte; sample value 01): Starting Head.
0x01C0 (6 bits; sample value 01*): Starting Sector. Only bits 0-5 are used. The upper two bits, 6 and 7, are used by the Starting Cylinder field.
0x01C1 (10 bits; sample value 00*): Starting Cylinder. Uses 1 byte in addition to the upper 2 bits from the Starting Sector field to make up the cylinder value. The Starting Cylinder is a 10-bit number that has a maximum value of 1023.
0x01C2 (1 byte; sample value 07): System ID. Defines the volume type. See the table titled System ID Values later in this section for sample values.
0x01C3 (1 byte; sample value FE): Ending Head.
0x01C4 (6 bits; sample value BF*): Ending Sector. Only bits 0-5 are used. The upper two bits, 6 and 7, are used by the Ending Cylinder field.
0x01C5 (10 bits; sample value 09*): Ending Cylinder. Uses 1 byte in addition to the upper 2 bits from the Ending Sector field to make up the cylinder value. The Ending Cylinder is a 10-bit number, with a maximum value of 1023.
0x01C6 (4 bytes; sample value 3F 00 00 00): Relative Sectors. The offset from the beginning of the disk to the beginning of the volume, counting by sectors.
0x01CA (4 bytes; sample value 4B F5 7F 00): Total Sectors. The total number of sectors in the volume.

1. Numbers larger than one byte are stored in little endian format, or reverse-byte ordering. Little endian format is a method of storing a number so that the least significant byte appears first in the hexadecimal number notation. For example, the sample value for the Relative Sectors field in the previous table, 3F 00 00 00, is a little endian representation of 0x0000003F. The decimal equivalent of this little endian number is 63.
2. Sample values marked with an asterisk (*) do not accurately represent the value of the fields, because the fields are either 6 bits or 10 bits and the data is recorded in bytes.

Boot Indicator Field

The first element of the partition table, the Boot Indicator field, indicates whether the volume is the active partition. Only one primary partition on the disk can have this field set. See the previous table for the acceptable values. It is possible to have different operating systems and different file systems on different volumes. By using disk configuration tools, such as the Windows Server 2003-based Disk Management and DiskPart, or the MS-DOS-based Fdisk, to designate a primary partition as active, the Boot Indicator field for that partition is set in the partition table.

System ID Field

Another element of the partition table is the System ID field. It defines which file system, such as FAT16, FAT32, or NTFS, was used to format the volume. The System ID field also identifies an extended partition, if one is defined. Windows Server 2003 uses the System ID field to determine which file system device drivers to load during startup. The following table identifies the values for the System ID field. System ID Values

0x01: FAT12 primary partition or logical drive (fewer than 32,680 sectors in the volume)
0x04: FAT16 partition or logical drive (32,680-65,535 sectors, or 16 MB-33 MB)
0x05: Extended partition
0x06: BIGDOS FAT16 partition or logical drive (33 MB-4 GB)
0x07: Installable File System (NTFS partition or logical drive)
0x0B: FAT32 partition or logical drive
0x0C: FAT32 partition or logical drive using BIOS INT 13h extensions
0x0E: BIGDOS FAT16 partition or logical drive using BIOS INT 13h extensions
0x0F: Extended partition using BIOS INT 13h extensions
0x12: EISA partition or OEM partition
0x42: Dynamic volume
0x84: Power management hibernation partition
0x86: Multidisk FAT16 volume created by using Windows NT 4.0
0x87: Multidisk NTFS volume created by using Windows NT 4.0
0xA0: Laptop hibernation partition
0xDE: Dell OEM partition
0xFE: IBM OEM partition
0xEE: GPT partition
0xEF: EFI System partition on an MBR disk

Windows Server 2003 does not support multidisk volumes that are created by using Windows NT 4.0 or earlier, and that use System ID values 0x86, 0x87, 0x8B, or 0x8C. Before you upgrade from Windows NT Server 4.0 to Windows Server 2003, you must first back up and then delete all multidisk volumes. After you complete the upgrade, create dynamic volumes and restore the data. If you do not delete the multidisk volumes before beginning Setup, you must use the Ftonline tool, which is part of Windows Support Tools, to access the volume after Setup completes. If you are upgrading from Windows 2000 to Windows Server 2003, you must convert the multidisk volumes to dynamic before you begin Setup, or Setup does not continue.

MS-DOS can only access volumes that have a System ID value of 0x01, 0x04, 0x05, or 0x06. However, you can delete volumes that have the other values listed in the table titled System ID Values earlier in this section by using Disk Management, DiskPart, or the MS-DOS tool Fdisk.

Starting and Ending Cylinder, Head, and Sector Fields

The Starting and Ending Cylinder, Head, and Sector fields (collectively known as the CHS fields) are additional elements of the partition table. These fields are essential for starting the computer. The master boot code uses these fields to find and load the boot sector of the active partition. The Starting CHS fields for non-active partitions point to the boot sectors of the remaining primary partitions and the extended boot record (EBR) of the first logical drive in the extended partition, as shown in the figure titled Interpreting Data in the Partition Table earlier in this section. Knowing the starting sector of an extended partition is very important for low-level disk troubleshooting. If your disk fails, you need to work with the partition starting point (among other factors) to retrieve stored data.

The Ending Cylinder field in the partition table is 10 bits long, which limits the number of cylinders that can be described in the partition table to a range of 0 through 1,023. The Starting Head and Ending Head fields are each one byte long, which limits the field range from 0 through 255. The Starting Sector and Ending Sector fields are each six bits long, which limits the range of these fields from 0 through 63. However, the enumeration of sectors starts at 1 (not 0, as for other fields), so the maximum number of sectors per track is 63.

Because Windows Server 2003 supports hard disks that are low-level formatted with a standard 512-byte sector, the maximum disk capacity described by the partition table is calculated as follows:

Maximum capacity = sector size x cylinders (10 bits) x heads (8 bits) x sectors per track (6 bits)

Using the maximum possible values yields: 512 x 1024 x 256 x 63 (or approximately 512 x 2^24) = 8,455,716,864 bytes, or 7.8 gigabytes (GB).

Windows Server 2003 and other Windows-based operating systems that support BIOS INT 13h extensions can access partitions that exceed the first 7.8 GB of the disk by ignoring the Starting and Ending CHS fields in favor of the Relative Sectors and Total Sectors fields. Windows 2000 and Windows Server 2003 ignore the Starting and Ending CHS fields regardless of whether the partition exceeds the first 7.8 GB of the disk. However, Windows Server 2003 must place the appropriate values in the Starting and Ending CHS fields because Windows 95, Windows 98, and Windows Millennium Edition (which all support BIOS INT 13h extensions) use the Starting and Ending CHS fields if the partition does not exceed the first 7.8 GB of the disk. These fields are also required to maintain compatibility with the BIOS INT 13h for startup. MS-DOS and other Windows operating systems that do not support BIOS INT 13h extensions ignore partitions that exceed the 7.8 GB boundary because these partitions use a System ID that is recognized only by operating systems that support BIOS INT 13h extensions. Both the operating system and the computer must support BIOS INT 13h extensions if you want to create partitions that exceed the first 7.8 GB of the disk.

Relative Sectors and Total Sectors Fields

The Relative Sectors field represents the offset from the beginning of the disk to the beginning of the volume, counting by sectors, for the volume described by the partition table entry. The Total Sectors field represents the total number of sectors in the volume. Using the Relative Sectors and Total Sectors fields (resulting in a 32-bit number) provides eight more bits than the CHS scheme to represent the total number of sectors. This allows you to create partitions that contain up to 2^32 sectors. With a standard sector size of 512 bytes, the 32 bits used to represent the Relative Sectors and Total Sectors fields translate into a maximum partition size of 2 terabytes (or 2,199,023,255,552 bytes).

Note

For more information about the maximum partition size that each file system supports, see NTFS Technical Reference.
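The entry layout and little endian rules described above can be expressed as a small C structure. The following sketch is illustrative only; the type and helper names are not taken from any Windows header, and it assumes a little endian host, which is the case for the x86-based and Itanium-based computers discussed here.

/* Illustration: the 16-byte MBR partition table entry and its CHS packing. */
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  BootIndicator;    /* 0x80 = active partition, 0x00 = inactive */
    uint8_t  StartingHead;
    uint8_t  StartingSectCyl;  /* bits 0-5: sector, bits 6-7: cylinder high bits */
    uint8_t  StartingCylLow;   /* low 8 bits of the 10-bit cylinder value        */
    uint8_t  SystemId;         /* 0x07 = NTFS, 0x0B = FAT32, 0xEE = GPT, ...     */
    uint8_t  EndingHead;
    uint8_t  EndingSectCyl;
    uint8_t  EndingCylLow;
    uint32_t RelativeSectors;  /* stored little endian on disk */
    uint32_t TotalSectors;     /* stored little endian on disk */
} MBR_PARTITION_ENTRY;
#pragma pack(pop)

static unsigned ChsSector(uint8_t sectCyl)   { return sectCyl & 0x3F; }
static unsigned ChsCylinder(uint8_t sectCyl, uint8_t cylLow)
{
    /* The upper two bits of the sector byte are bits 8-9 of the cylinder. */
    return ((unsigned)(sectCyl & 0xC0) << 2) | cylLow;
}

void DumpFirstEntry(const uint8_t sector0[512])
{
    /* Entry 1 begins at offset 0x1BE; the sample values in the table above
       decode to relative sectors 0x3F (63) and total sectors 0x7FF54B. */
    const MBR_PARTITION_ENTRY *e =
        (const MBR_PARTITION_ENTRY *)(sector0 + 0x1BE);

    printf("boot %02X  system ID %02X\n", e->BootIndicator, e->SystemId);
    printf("start C/H/S %u/%u/%u\n",
           ChsCylinder(e->StartingSectCyl, e->StartingCylLow),
           e->StartingHead, ChsSector(e->StartingSectCyl));
    printf("relative sectors %u  total sectors %u\n",
           e->RelativeSectors, e->TotalSectors);
}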

The following figure shows the MBR, partition table, and boot sectors on a basic disk with four partitions. The definitions of the fields in the partition table and the extended partition tables are the same. Detail of a Basic Disk with Four Partitions

Extended Boot Record on Basic Disks

An extended boot record (EBR), which consists of an extended partition table and the signature word for the sector, exists for each logical drive in the extended partition. It contains the only information on the first side of the first cylinder of each logical drive in the extended partition. The boot sector in a logical drive is usually located at either Relative Sector 32 or 63. However, if there is no extended partition on a disk, there are no EBRs and no logical drives. The first entry in an extended partition table for the first logical drive points to its own boot sector. The second entry points to the EBR of the next logical drive. If no further logical drives exist, the second entry is not used and is recorded as a series of zeros. If there are additional logical drives, the first entry of the extended partition table for the second logical drive points to its own boot sector. The second entry of the extended partition table for the second logical drive points to the EBR of the next logical drive. The third and fourth entries of an extended partition table are never used. As shown in the following figure, the EBRs of the logical drives in the extended partition are a linked list. The figure shows three logical drives on an extended partition, illustrating the difference in extended partition tables between preceding logical drives and the last logical drive. Detail of an Extended Partition on a Basic Disk

With the exception of the last logical drive on the extended partition, the format of the extended partition table, which is described in the following table, is repeated for each logical drive: the first entry identifies the logical drive's own boot sector and the second entry identifies the next logical drive's EBR. The extended partition table for the last logical drive has only its own partition entry listed. The second through fourth entries of the last extended partition table are not used. Contents of Extended Partition Table Entries

First entry: Information about the current logical drive in the extended partition, including the starting address for the boot sector preceding the data.
Second entry: Information about the next logical drive in the extended partition, including the address of the sector that contains the EBR for the next logical drive. If no additional logical drives exist, this field is not used.
Third entry: Not used.
Fourth entry: Not used.

The fields in each entry of the extended partition table are identical to the MBR partition table entries. For more information about partition table fields, see the table titled Partition Table Fields earlier in this section. The Relative Sectors field in an extended partition table entry shows the number of bytes that are offset from the beginning of the extended partition to the first sector in the logical drive. The number in the Total Sectors field refers to the number of sectors that make up the logical drive. The value of the Total Sectors field equals the number of sectors from the boot sector defined by the extended partition table entry to the end of the logical drive.

Because of the importance of the MBR and EBR sectors, it is recommended that you run disk-scanning tools regularly and that you regularly back up all your data files to protect against losing access to a volume or an entire disk.

Disk Sectors on GPT Disks


GUID partition table (GPT) disks use primary and backup partition structures to provide redundancy. These structures are located at the beginning and the end of the disk. GPT identifies these structures by their logical block address (LBA) rather than by their relative sectors. Using this scheme, sectors on a disk are numbered from 0 to n-1, where n is the number of sectors on the disk. As shown in the following figure, the first structure on a GPT disk is the Protective MBR in LBA 0, followed by the primary GPT header in LBA 1. The GPT header is followed by the primary GUID partition entry array, which includes a partition entry for each partition on the disk. Partitions on the disk are located between the primary and backup GUID partition entry arrays. The partitions must be placed within the first usable and last usable LBAs, as specified in the GPT partition header. Partition Structures on a GPT Disk

Protective MBR

The Extensible Firmware Interface (EFI) specification requires that LBA 0 be reserved for compatibility code and a Protective MBR. The Protective MBR has the same format as an existing MBR, and it contains one partition entry with a System ID value of 0xEE. This entry reserves the entire space of the disk, including the space used by the GPT header, as a single partition. The Protective MBR is included to prevent disk utilities that were designed for MBR disks from interpreting the disk as having available space and overwriting GPT partitions. The Protective MBR is ignored by EFI; no MBR code is run. The following example shows a partial printout of a Protective MBR.
000001B0:  00 00 00 00 00 00 00 00 04 06 04 06 00 00 00 00   ................
000001C0:  02 00 EE FF FF FF 01 00 00 00 FF FF FF FF 00 00   ................
000001D0:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
000001E0:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
000001F0:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 AA   ..............U.

The following table describes the fields in each entry in the Protective MBR. Protective MBR in GPT Disks

0x01BE (1 byte; sample value 00): Boot Indicator. Must be set to 00 to indicate that this partition cannot be booted.
0x01BF (1 byte; sample value 00): Starting Head. Matches the Starting LBA of the single partition.
0x01C0 (1 byte; sample value 02): Starting Sector. Matches the Starting LBA of the single partition.
0x01C1 (1 byte; sample value 00): Starting Cylinder. Matches the Starting LBA of the GPT partition.
0x01C2 (1 byte; sample value EE): System ID. Must be EE to specify that the single partition is a GPT partition. If you move a GPT disk to a computer running Windows 2000 with Service Pack 1 or greater or Windows Server 2003, the partition is displayed as a GPT Protective Partition and cannot be deleted.
0x01C3 (1 byte; sample value FF): Ending Head. Matches the Ending LBA of the single partition. If the Ending LBA is too large to be represented here, this field is set to FF.
0x01C4 (1 byte; sample value FF): Ending Sector. Matches the Ending LBA of the single partition. If the Ending LBA is too large to be represented here, this field is set to FF.
0x01C5 (1 byte; sample value FF): Ending Cylinder. Matches the Ending LBA of the single partition. If the Ending LBA is too large to be represented here, this field is set to FF.
0x01C6 (4 bytes; sample value 01 00 00 00): Starting LBA. Always set to 1. The Starting LBA begins at the GPT partition table header, which is located at LBA 1.
0x01CA (4 bytes; sample value FF FF FF FF): Size in LBA. The size of the single partition. Must be set to FF FF FF FF if this value is too large to be represented here.

1. Numbers larger than one byte are stored in little endian format, or reverse-byte ordering. Little endian format is a method of storing a number so that the least significant byte appears first in the hexadecimal number notation.
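A minimal way to observe the Protective MBR on a running system is to read sector 0 of a disk and test the System ID byte at offset 0x01C2. The following hedged sketch assumes \\.\PhysicalDrive0 and administrative rights; error handling is abbreviated.

/* Sketch: detect a GPT Protective MBR by reading sector 0. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BYTE sector[512];
    DWORD read = 0;

    HANDLE hDisk = CreateFileW(L"\\\\.\\PhysicalDrive0", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hDisk == INVALID_HANDLE_VALUE ||
        !ReadFile(hDisk, sector, sizeof(sector), &read, NULL) ||
        read != sizeof(sector)) {
        printf("Could not read sector 0: %lu\n", GetLastError());
        return 1;
    }

    /* Signature word at 0x1FE-0x1FF, System ID of entry 1 at 0x1C2. */
    if (sector[0x1FE] == 0x55 && sector[0x1FF] == 0xAA) {
        printf(sector[0x1C2] == 0xEE
                   ? "Protective MBR found: this is a GPT disk.\n"
                   : "Conventional MBR partition table.\n");
    } else {
        printf("No valid MBR signature.\n");
    }

    CloseHandle(hDisk);
    return 0;
}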

GPT Partition Table Header


The GPT header defines the range of logical block addresses that are usable by partition entries. The GPT header also defines its location on the disk, its GUID, and a 32-bit cyclic redundancy check (CRC32) checksum that is used to verify the integrity of the GPT header. GPT disks use a primary and a backup GUID partition table (GPT) header:

The primary GPT header is located at LBA 1, directly after the Protective MBR.
The backup GPT header is located in the last sector of the disk. No data follows the backup GPT header.

EFI verifies the integrity of the GPT headers by using a CRC32 checksum, which is a calculated value that is used to test data for the presence of errors. If the primary GPT header is corrupted, the system checks the backup GPT header checksum. If the backup checksum is valid, then the backup GPT header is used to restore the primary GPT header. This restoration process works in reverse if the primary GPT header is valid but the backup GPT header is corrupted. If both the primary and backup GPT headers are corrupted, then the 64-bit versions of Windows Server 2003 cannot access the disk. Note

Do not use disk editing tools such as DiskProbe to make changes to GPT disks because any change that you make renders the checksums invalid, which might cause the disk to become inaccessible. To make changes to GPT disks, do either of the following:

Use Diskpart.efi in the firmware environment.
Use Diskpart.exe or Disk Management in the 64-bit versions of Windows Server 2003.

The following example shows a partial printout of a GPT header.


00000000:  45 46 49 20 50 41 52 54 00 00 01 00 5C 00 00 00   EFI PART....\...
00000010:  27 6D 9F C9 00 00 00 00 01 00 00 00 00 00 00 00   'm..............
00000020:  37 C8 11 01 00 00 00 00 22 00 00 00 00 00 00 00   7.......".......
00000030:  17 C8 11 01 00 00 00 00 00 A2 DA 98 9F 79 C0 01   .............y..
00000040:  A1 F4 04 62 2F D5 EC 6D 02 00 00 00 00 00 00 00   ...b/..m........
00000050:  80 00 00 00 80 00 00 00 27 C3 F3 85 00 00 00 00   ........'.......
00000060:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................

The following table describes the fields in the GPT header. GUID Partition Table Header

0x00 (8 bytes; sample value 45 46 49 20 50 41 52 54): Signature. Used to identify all EFI-compatible GPT headers. The value must always be 45 46 49 20 50 41 52 54.
0x08 (4 bytes; sample value 00 00 01 00): Revision. The revision number of the EFI specification to which the GPT header complies. For version 1.0, the value is 00 00 01 00.
0x0C (4 bytes; sample value 5C 00 00 00): Header Size. The size, in bytes, of the GPT header. The size is always 5C 00 00 00, or 92 bytes. The remaining bytes in LBA 1 are reserved.
0x10 (4 bytes; sample value 27 6D 9F C9): CRC32 Checksum. Used to verify the integrity of the GPT header. The 32-bit cyclic redundancy check (CRC) algorithm is used to perform this calculation.
0x14 (4 bytes; sample value 00 00 00 00): Reserved. Must be 0.
0x18 (8 bytes; sample value 01 00 00 00 00 00 00 00): Primary LBA. The LBA that contains the primary GPT header. The value is always equal to LBA 1.
0x20 (8 bytes; sample value 37 C8 11 01 00 00 00 00): Backup LBA. The LBA address of the backup GPT header. This value is always equal to the last LBA on the disk.
0x28 (8 bytes; sample value 22 00 00 00 00 00 00 00): First Usable LBA. The first usable LBA that can be contained in a GUID partition entry. In other words, the first partition begins at this LBA. In the 64-bit versions of Windows Server 2003, this number is always LBA 34.
0x30 (8 bytes; sample value 17 C8 11 01 00 00 00 00): Last Usable LBA. The last usable LBA that can be contained in a GUID partition entry.
0x38 (16 bytes; sample value 00 A2 DA 98 9F 79 C0 01 A1 F4 04 62 2F D5 EC 6D): Disk GUID. A unique number that identifies the partition table header and the disk itself.
0x48 (8 bytes; sample value 02 00 00 00 00 00 00 00): Partition Entry LBA. The starting LBA of the GUID partition entry array. This number is always LBA 2.
0x50 (4 bytes; sample value 80 00 00 00): Number of Partition Entries. The maximum number of partition entries that can be contained in the GUID partition entry array. In the 64-bit versions of Windows Server 2003, this number is equal to 128.
0x54 (4 bytes; sample value 80 00 00 00): Size of Partition Entry. The size, in bytes, of each partition entry in the GUID partition entry array. Each partition entry is 128 bytes.
0x58 (4 bytes; sample value 27 C3 F3 85): Partition Entry Array CRC32. Used to verify the integrity of the GUID partition entry array. The 32-bit CRC algorithm is used to perform this calculation.
0x5C (420 bytes): Reserved. Must be 0.

1. Numbers larger than one byte are stored in little endian format, or reverse-byte ordering. Little endian format is a method of storing a number so that the least significant byte appears first in the hexadecimal number notation.
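The header layout in the previous table can be mirrored by a packed C structure, as in the sketch below. The field names are descriptive rather than taken from a Windows header, and all multi-byte values are little endian on disk.

/* Illustration: the 92-byte GPT header laid out per the table above. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  Signature[8];        /* 0x00: "EFI PART" (45 46 49 20 50 41 52 54) */
    uint32_t Revision;            /* 0x08: 0x00010000 for version 1.0            */
    uint32_t HeaderSize;          /* 0x0C: always 92 (0x5C)                       */
    uint32_t HeaderCrc32;         /* 0x10: CRC32 of the header                    */
    uint32_t Reserved;            /* 0x14: must be 0                              */
    uint64_t PrimaryLba;          /* 0x18: LBA of this header (1 for the primary) */
    uint64_t BackupLba;           /* 0x20: LBA of the backup header               */
    uint64_t FirstUsableLba;      /* 0x28: 34 on 64-bit Windows Server 2003       */
    uint64_t LastUsableLba;       /* 0x30 */
    uint8_t  DiskGuid[16];        /* 0x38 */
    uint64_t PartitionEntryLba;   /* 0x48: always LBA 2                           */
    uint32_t NumberOfEntries;     /* 0x50: 128                                    */
    uint32_t SizeOfEntry;         /* 0x54: 128 bytes per entry                    */
    uint32_t EntryArrayCrc32;     /* 0x58 */
    /* The remaining 420 bytes of LBA 1 are reserved and must be 0. */
} GPT_HEADER;
#pragma pack(pop)

/* Compile-time check that the layout matches the documented 92-byte size. */
typedef char AssertGptHeaderSize[sizeof(GPT_HEADER) == 92 ? 1 : -1];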

GUID Partition Entry Array


Similar to the partition table on MBR disks, the GUID partition entry array contains partition entries that represent each partition on the disk. The 64-bit versions of Windows Server 2003 create an array that is 16,384 bytes, so the first usable block must start at an LBA greater than or equal to 34. (LBA 0 contains the protective MBR; LBA 1 contains the GPT header; and LBAs 2 through 33 are used by the GUID partition entry array.) Each GPT disk contains two GUID partition entry arrays:

The primary GUID partition entry array is located after the GUID partition table header and ends before the first usable LBA.
The backup GUID partition entry array is located after the last usable LBA and ends before the backup GUID partition table header.

A CRC32 checksum of the GUID partition entry array is stored in the GPT header. When a new partition is added, this checksum is updated in the primary and backup GUID partition entries, and then the GPT header checksum is updated.

GUID Partition Entry

A GUID partition entry defines a single partition and is 128 bytes long. Because the 64-bit versions of Windows Server 2003 create a GUID partition entry array that has 16,384 bytes, you can have a maximum of 128 partitions on a basic GPT disk. Each GUID partition entry begins with a partition type GUID. The 16-byte partition type GUID, which is similar to a System ID in the partition table of an MBR disk, identifies the type of data that the partition contains and identifies how the partition is used. The 64-bit versions of Windows Server 2003 recognize only the partition type GUIDs described in the following table, and do not mount any other type of partition. However, original equipment manufacturers (OEMs) and independent software vendors (ISVs), as well as other operating systems might define additional partition type GUIDs. Partition Type GUIDs

Unused entry: {00000000000000000000000000000000}
EFI System partition: {28732AC11FF8D211BA4B00A0C93EC93B}
Microsoft Reserved partition: {16E3C9E35C0BB84D817DF92DF00215AE}
Primary partition on a basic disk: {A2A0D0EBE5B9334487C068B6B72699C7}
LDM Metadata partition on a dynamic disk: {AAC808588F7EE04285D2E1E90434CFB3}
LDM Data partition on a dynamic disk: {A0609BAF3114624FBC683311714A69AD}

The GUID values are shown as the byte sequence stored on disk. The following example illustrates a partial hexadecimal printout of the GUID partition entry array on a basic GPT disk. This printout shows three partition entries: an EFI System partition, a Microsoft Reserved partition, and a primary partition. The partition type GUIDs match the entries in the previous table.
00000000:  28 73 2A C1 1F F8 D2 11 BA 4B 00 A0 C9 3E C9 3B   (s*......K...>.;
00000010:  C0 94 77 FC 43 86 C0 01 92 E0 3C 77 2E 43 AC 40   ..w.C.....<w.C.@
00000020:  3F 00 00 00 00 00 00 00 CC 2F 03 00 00 00 00 00   ?......../......
00000030:  00 00 00 00 00 00 00 00 45 00 46 00 49 00 20 00   ........E.F.I. .
00000040:  73 00 79 00 73 00 74 00 65 00 6D 00 20 00 70 00   s.y.s.t.e.m. .p.
00000050:  61 00 72 00 74 00 69 00 74 00 69 00 6F 00 6E 00   a.r.t.i.t.i.o.n.
00000060:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
00000070:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
00000080:  16 E3 C9 E3 5C 0B B8 4D 81 7D F9 2D F0 02 15 AE   ....\..M.}.-....
00000090:  80 BC 80 FC 43 86 C0 01 50 7B 9E 5F 80 78 F5 31   ....C...P{._.x.1
000000A0:  CD 2F 03 00 00 00 00 00 D0 2A 04 00 00 00 00 00   ./.......*......
000000B0:  00 00 00 00 00 00 00 00 4D 00 69 00 63 00 72 00   ........M.i.c.r.
000000C0:  6F 00 73 00 6F 00 66 00 74 00 20 00 72 00 65 00   o.s.o.f.t. .r.e.
000000D0:  73 00 65 00 72 00 76 00 65 00 64 00 20 00 70 00   s.e.r.v.e.d. .p.
000000E0:  61 00 72 00 74 00 69 00 74 00 69 00 6F 00 6E 00   a.r.t.i.t.i.o.n.
000000F0:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
00000100:  A2 A0 D0 EB E5 B9 33 44 87 C0 68 B6 B7 26 99 C7   ......3D..h..&..
00000110:  C0 1B 0B 00 44 86 C0 01 F1 B3 12 71 4F 75 88 21   ....D......qOu.!
00000120:  D1 2A 04 00 00 00 00 00 4E 2F 81 00 00 00 00 00   .*......N/......
00000130:  00 00 00 00 00 00 00 00 42 00 61 00 73 00 69 00   ........B.a.s.i.
00000140:  63 00 20 00 64 00 61 00 74 00 61 00 20 00 70 00   c. .d.a.t.a. .p.
00000150:  61 00 72 00 74 00 69 00 74 00 69 00 6F 00 6E 00   a.r.t.i.t.i.o.n.
00000160:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
00000170:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................


The following table illustrates the layout of a GUID partition entry. The sample values correspond to the EFI System partition entry in the preceding example. GUID Partition Entry

0x00 (16 bytes; sample value 28 73 2A C1 1F F8 D2 11 BA 4B 00 A0 C9 3E C9 3B): Partition Type GUID. Identifies the type of partition. The partition type GUID in this example identifies an EFI System partition. For a description of partition type GUIDs, see the table titled Partition Type GUIDs earlier in this section.
0x10 (16 bytes; sample value C0 94 77 FC 43 86 C0 01 92 E0 3C 77 2E 43 AC 40): Unique Partition GUID. A unique ID created for each partition entry.
0x20 (8 bytes; sample value 3F 00 00 00 00 00 00 00): Starting LBA. The starting LBA of the partition that is defined by this partition entry.
0x28 (8 bytes; sample value CC 2F 03 00 00 00 00 00): Ending LBA. The ending LBA of the partition that is defined by this partition entry.
0x30 (8 bytes; sample value 00 00 00 00 00 00 00 00): Attribute Bits. Describe how the partition is used. For a description of the attributes used by the 64-bit versions of Windows Server 2003, see the table titled GUID Partition Entry Attributes Used by the 64-Bit Editions of Windows on Itanium-based Computers later in this section.
0x38 (72 bytes; sample value EFI system partition): Partition Name. A 36-character Unicode string that can be used to name the partition.

1. Numbers larger than one byte are stored in little endian format, or reverse-byte ordering. Little endian format is a method of storing a number so that the least significant byte appears first in the hexadecimal number notation.

GUID Partition Entry Attributes

GUID partition entry attributes are descriptors for how a partition is used. The attributes are specified within a 64-bit value, so EFI supports up to 64 different attributes. The 64-bit versions of Windows Server 2003 use the attributes described in the following table. GUID Partition Entry Attributes Used by the 64-Bit Editions of Windows on Itanium-based Computers

Bit 0: Specifies that this partition is required for the platform to function. All original equipment manufacturer (OEM) partitions must have this bit set to protect the OEM partition from being overwritten by the disk tools supplied with Windows Server 2003.
Bit 60: Marks the partition as read-only. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.
Bit 62: Marks the partition as hidden. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.
Bit 63: Prevents the system from assigning a default drive letter to the partition. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.
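The 128-byte GUID partition entry and the attribute bits above can likewise be sketched as a packed C structure. The structure and macro names below are illustrative, not taken from a Windows header.

/* Illustration: the 128-byte GUID partition entry and its attribute bits. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  PartitionTypeGuid[16];   /* 0x00: identifies how the partition is used */
    uint8_t  UniquePartitionGuid[16]; /* 0x10: unique per partition                  */
    uint64_t StartingLba;             /* 0x20 */
    uint64_t EndingLba;               /* 0x28 */
    uint64_t Attributes;              /* 0x30: see the bit definitions below         */
    uint16_t PartitionName[36];       /* 0x38: 36-character Unicode name, 72 bytes   */
} GPT_PARTITION_ENTRY;
#pragma pack(pop)

/* Attribute bits described in the table above. */
#define GPT_ATTR_PLATFORM_REQUIRED  (1ULL << 0)   /* OEM partition, do not overwrite */
#define GPT_ATTR_READ_ONLY          (1ULL << 60)
#define GPT_ATTR_HIDDEN             (1ULL << 62)
#define GPT_ATTR_NO_DRIVE_LETTER    (1ULL << 63)

/* Compile-time check that the layout matches the documented 128-byte size. */
typedef char AssertGptEntrySize[sizeof(GPT_PARTITION_ENTRY) == 128 ? 1 : -1];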

Boot Sectors on GPT Disks


Boot sectors on GPT disks are similar to boot sectors on MBR disks, except that EFI ignores all x86 code in the boot sector. Instead, EFI uses its own file system driver to read the BIOS parameter block (BPB) and then mount the volume.

Basic Disks and Volumes Processes and Interactions


The following basic disk and volume processes and interactions assume that your computer has at least one basic disk and that the basic disk is functioning properly.

Creating a Basic Volume

When you create a basic volume, the Virtual Disk Service (VDS) uses the IOCTL_DISK_SET_DRIVE_LAYOUT_EX control code to set the drive layout and add a new partition. VDS calls the disk driver. Partition manager (Partmgr.sys) sits on top of the disk driver as a filter. Partition manager creates the partition device and notifies the volume manager that there is a new partition. The volume manager announces the volume device to Plug and Play and to the system. Plug and Play notifies the mount point manager (Mountmgr.sys) that a new volume has arrived, and the mount point manager sets up a drive letter as long as AutoMount is enabled.

Note

AutoMount is disabled by default on Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition.
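For context, the following hedged sketch shows the simplest piece of user-mode plumbing beneath this process: initializing an empty disk with an MBR partition table through IOCTL_DISK_CREATE_DISK, one of the control codes listed earlier. This is not how VDS itself is implemented, the disk number and signature are arbitrary assumptions, and the call is destructive, so it must never be pointed at a disk that holds data.

/* Hedged sketch: write a fresh, empty MBR partition table to a spare disk. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    CREATE_DISK cd;
    DWORD bytes = 0;

    HANDLE hDisk = CreateFileW(L"\\\\.\\PhysicalDrive1",   /* assumed spare, empty disk */
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hDisk == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    ZeroMemory(&cd, sizeof(cd));
    cd.PartitionStyle = PARTITION_STYLE_MBR;
    cd.Mbr.Signature  = 0x12345678;            /* arbitrary disk signature */

    if (!DeviceIoControl(hDisk, IOCTL_DISK_CREATE_DISK,
                         &cd, sizeof(cd), NULL, 0, &bytes, NULL)) {
        printf("IOCTL_DISK_CREATE_DISK failed: %lu\n", GetLastError());
    } else {
        /* Ask the storage stack to re-read the new partition table. */
        DeviceIoControl(hDisk, IOCTL_DISK_UPDATE_PROPERTIES,
                        NULL, 0, NULL, 0, &bytes, NULL);
        printf("Disk initialized with an empty MBR partition table.\n");
    }

    CloseHandle(hDisk);
    return 0;
}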

What Happens When a Basic Volume Is Created

Reading or Writing to a Basic Volume


When you read or write data to a basic disk or volume, the input/output (I/O) is sent through the file system to the volume manager. The volume manager creates a new I/O request packet (IRP) and sends it to the partition manager (Partmgr.sys) in the disk stack, which relays it to the disk driver. The volume manager waits for the completed IRP to return from the disk stack. Then, the volume manager completes and returns the file system IRP. The following figure illustrates the processes involved when reading or writing to a basic volume. Basic Disk and Volume Input/Output (I/O) Processes

Converting a Basic Disk into a Dynamic Disk


You can convert a basic disk to dynamic by using Disk Management or by using DiskPart, a command-line tool that provides the same functions as Disk Management. When you convert a disk to dynamic, the following events occur:

All existing primary partitions and logical drives become simple volumes.
The disk joins the local disk group and receives a copy of the dynamic disk database.

Note

For certain disks, the menu command to convert the disk to dynamic is unavailable in Disk Management.

You can convert basic disks to dynamic at any time. In most cases, you do not need to restart your computer to complete the conversion. However, you must restart the computer if the disks you are converting contain any of the following volumes:

System volume (x86-based computers only). The system volume contains hardware-specific files such as Ntldr and Boot.ini. These files are needed to load Windows Server 2003 in x86-based computers.
Boot volume. The boot volume contains the Windows Server 2003 operating system and its support files. In x86-based computers, the boot volume can be, but does not have to be, the same volume as the system volume. In Itanium-based computers, the boot volume is never the same volume as the EFI System partition.
Volumes that contain the paging file. The paging file is a hidden file on the hard disk that Windows Server 2003 uses to hold parts of programs and data files that do not fit in memory.

Note

When you convert MBR disks that contain the system, boot, or paging file volumes to dynamic, you are prompted to restart the computer two times. You must restart the computer both times to complete the conversion.

As shown in the following figure, the Disk Management snap-in identifies the system and boot volumes, as well as those that contain the paging file, in the graphical and disk list views. If you have a combined system and boot volume that also contains the paging file (the most common scenario), then only System is shown. How Disk Management Identifies Separate System, Boot, and Paging File Volumes for an x86-based Computer

The list volume command in DiskPart shows the system, boot, and paging file volumes as follows:
Volume ###  Ltr  Label       Fs     Type        Size      Status    Info
----------  ---  ----------  -----  ----------  --------  --------  --------
Volume 0     G                      DVD-ROM         0 B   Healthy
Volume 1     C               NTFS   Partition    2048 MB  Healthy   System
Volume 2     E               NTFS   Partition   17418 MB  Healthy   Boot
Volume 3     F               NTFS   Partition    4003 MB  Healthy   Pagefile
Volume 4     H               NTFS   Partition     751 MB  Healthy

Even after you convert a disk to dynamic, some types of primary partitions do not become dynamic volumes. These partitions retain their partition entries in the partition table and are shown as primary partitions in Disk Management. These partitions are:

Known OEM partitions (usually displayed in Disk Management as EISA Configuration partitions).
EFI System partitions on GPT disks.

Before Converting Disks to Dynamic

Converting a disk to dynamic changes the partition layout on the disk and creates the dynamic disk database. The result of these changes is increased flexibility for volume management in Windows Server 2003. However, these changes are not easily reversed, and the structure of dynamic disks is not compatible with some operating systems. Therefore, you must consider the following issues before you convert disks to dynamic.
If the disk contains partitions displayed as Healthy (Unknown) in Disk Management

Do not convert a disk to dynamic if it contains unknown partitions created by other operating systems. Windows Server 2003 converts unrecognized partitions to dynamic, making them unreadable to other operating systems.
If a disk contains shadow copies

If you use separate disks to store the source volume and the shadow copies, and you convert the disk that contains the source volume to dynamic, the shadow copies are lost. Shadow copies are retained only if the source files and the shadow copies are stored on the same volume.
If the disk contains an OEM partition that is not at the beginning of the disk

Do not convert a disk to dynamic if it contains an OEM partition that is not at the beginning of the disk. (In Disk Management, an OEM partition usually appears as an EISA Configuration partition.) When you convert a disk to dynamic, Windows Server 2003 preserves the OEM partition only if it is the first partition on the disk. Otherwise the partition is deleted during the conversion to dynamic.
If you want to extend a dynamic volume

You can extend dynamic volumes that do not retain their partition entries in the partition table. The following volumes retain their entries in the partition table and cannot be extended:

The system volume and boot volume of the operating system that you used to convert the disk to dynamic.
Any basic volume that was present on the disk when you converted the disk from basic to dynamic by using the version of Disk Management included with Windows 2000.
Simple volumes on which you run the DiskPart command retain. This command adds a partition entry to the partition table. However, after you use this command, you can no longer extend the volume.

Note

The retain command adds an entry to the partition table of an MBR disk only for simple volumes that are contiguous, start at cylinder-aligned offsets, and are an integral number of cylinders in size. If a volume does not meet these requirements, the retain command fails. The following examples describe volumes on which the retain command will succeed:

The simple volume is contiguous and starts at the beginning of the disk.
The simple volume was present on the disk when the disk was converted to dynamic.

The only way to add more space to the system or boot volume on a dynamic disk is to back up all data on the disk, repartition and reformat the disk, reinstall Windows Server 2003, convert the disks to dynamic, and then restore the data from backup. The following volumes do not have partition entries and can be extended:

Simple volumes and spanned volumes created from unallocated space on a dynamic disk.

A basic volume that meets the following criteria:


The basic volume is not the system or boot volume.
The basic volume is on a disk that was converted from basic disk to dynamic disk by using Windows Server 2003.

Although striped, mirrored, or RAID-5 volumes do not have entries in the partition table, Windows Server 2003 does not support extending them. You can add more space to a striped, mirrored, or RAID-5 volume by backing up the data, deleting the volume, recreating the volume by using Windows Server 2003, and then restoring the data.
If you want to install Windows Server 2003 on a dynamic volume

You can install Windows Server 2003 only on dynamic volumes that retain their partition entries in the partition table. The only dynamic volumes listed in the partition table are the following:

The system volume and boot volume of the operating system (Windows Server 2003 or Windows 2000) that you used to convert the disk to dynamic. The system volume and boot volume can be simple or mirrored volumes.
Any basic volume that was present on the disk when you used Windows 2000 to convert the disk from basic to dynamic.
Simple volumes on which you run the DiskPart command retain. This command adds a partition entry to the partition table so that you can install Windows Server 2003 on the simple volume.
A basic mirror set that was converted to a dynamic mirrored volume by using Windows 2000. If you break this mirrored volume into two simple volumes, you can also install Windows Server 2003 on either simple volume because they both retain their partition entries.

Because these dynamic volumes retain their partition entries, you can install Windows Server 2003 on them. However, you cannot extend any of these volumes because you can only extend volumes that do not have entries in the partition table.
If you want to access the disk by using Windows Millennium Edition or earlier, or Windows NT 4.0

If you plan to move the disk after you convert it to dynamic, note that you can access dynamic disks only from computers that are running Windows 2000, Windows XP Professional, Windows XP 64-Bit Edition, or Windows Server 2003. You cannot access dynamic disks from computers running Windows NT 4.0 or earlier. When moving disks, note that access to dynamic disks is further restricted by the partition style used on the dynamic disk:

Dynamic MBR disks. Only computers running Windows 2000, Windows XP Professional, Windows XP 64-Bit Edition, or Windows Server 2003 can access dynamic MBR disks.
Dynamic GPT disks. Only Itanium-based computers running Windows XP 64-Bit Edition or the 64-bit versions of Windows Server 2003 can access dynamic GPT disks.

Note

Volumes on dynamic MBR and GPT disks are available across a network to computers running MS-DOS, Windows 95, Windows 98, Windows Millennium Edition, Windows NT 4.0 or earlier, Windows XP, and Windows Server 2003.

If a disk or computer contains multiple copies of Windows XP Professional, Windows Server 2003 or Windows 2000

Do not convert a disk to dynamic if it contains multiple copies of Windows XP Professional, Windows Server 2003, or Windows 2000. Even though these operating systems support dynamic disks, they require certain registry entries that allow them to start from dynamic disks. If the operating systems are installed on the same disk and you use one of the operating systems to convert the disk to dynamic, the registry of the other operating system becomes out-of-date because the drivers required to start the operating system from a dynamic disk are not loaded. Therefore, you can no longer start the other operating system. One way that you can use dynamic disks with Windows XP Professional, Windows Server 2003, and Windows 2000 in a multiple-boot configuration is to install each operating system to a different disk. However, startup problems can also occur if you boot from one of the operating systems and then convert the disks that contain the other operating systems to dynamic. To ensure that each operating system can start, start each operating system and then convert only the disk that contains the current boot volume to dynamic. For example, install Windows XP Professional on disk 1 and Windows Server 2003 on disk 2. Use Windows XP Professional to convert disk 1 to dynamic, and then use Windows Server 2003 to convert disk 2 to dynamic. By using this method, you ensure that the registries are updated for each boot volume.

Disks That Cannot Be Converted to Dynamic

Windows Server 2003 Setup and Disk Management ensure that disks initialized by Windows Server 2003 can be converted to dynamic. However, on some disks the conversion fails or the Convert to Dynamic Disk command is not available when you right-click a basic disk. The following conditions prevent you from converting a basic disk to dynamic.
Cluster disks

You cannot convert cluster disks to dynamic if they are connected to shared SCSI or Fibre Channel buses. Windows Cluster service cannot read disks that are dynamic and makes dynamic disks unavailable to programs or services that are dependent on these disk resources in the server cluster. For this reason, the option to convert these disks to dynamic is unavailable. You must use Veritas Volume Manager to use dynamic disks with Cluster service.
Removable disks

You cannot use dynamic disks on the following:


Removable media, such as Iomega Zip or Jaz disks, CDs, DVDs, or floppy disks.
Disks that use universal serial bus (USB) or IEEE 1394 (also called FireWire) interfaces.

Disks with sectors larger than 512 bytes

A sector is a unit of storage on a hard disk. The majority of hard disks use 512-byte sectors. Windows Server 2003 supports converting basic disks to dynamic only if the sector size of the basic disk is 512 bytes.
GPT disks with non-contiguous partitions

If an unknown partition lies between two known partitions on a GPT disk, you cannot convert the disk to dynamic. Unknown partitions are created by operating systems or utilities that use partition type GUIDs that the 64-bit versions of Windows Server 2003 do not recognize.
MBR disks that do not have space for the dynamic disk database

An MBR disk requires 1 MB of free space at the end of the disk to be used for the dynamic disk database. Windows Server 2003 and Windows 2000 automatically reserve 1 MB or one cylinder, whichever is greater, when creating partitions on a disk, but in rare cases, disks with partitions created by other operating systems might not have this space available. If this space is not available, you cannot convert the disk to dynamic. To convert the disk to dynamic, you must back up or move the data, delete the partitions, recreate the partitions, restore the data, and then convert the disk to dynamic. By using Windows Server 2003 to create the partitions, you ensure that the necessary space is available for the dynamic disk database. This limitation does not affect GPT disks because the database is created in its own partition with space borrowed from the Microsoft Reserved partition.
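If you need to rebuild such a disk, the general sequence can be scripted with DiskPart. This is only a sketch: the disk number is a placeholder, and it assumes the data on the disk has already been backed up elsewhere.

    rem DiskPart script (run with diskpart /s) to rebuild disk 2 after backing up its data
    select disk 2
    clean
    create partition primary
    rem Format the new partition, restore the backed-up data, and then convert
    convert dynamic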

Basic Disks and Volumes Tools and Settings

Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

In this section

Basic Disk and Volume Tools
Basic Disk and Volume Registry Entries
Basic Disk and Volume WMI Classes

The following tools, registry settings, and Windows Management Instrumentation (WMI) classes are associated with basic disks and volumes.

Basic Disk and Volume Tools


The following tools are associated with basic disks and volumes.

Bootcfg.exe: Boot Configuration Tool
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of the Boot Configuration Tool in the Windows Server 2003 family are identical. You can use the Boot Configuration Tool to query, configure, or change settings in the Boot.ini file on your computer.
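For example, you can list the current Boot.ini entries and change the boot menu timeout from a command prompt (a small illustration only; the timeout value is arbitrary):

    bootcfg /query
    bootcfg /timeout 10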
Dmdiag.exe: Disk Manager Diagnostics
Category

Windows Server 2003 operating system tool.

Version compatibility

All versions of Dmdiag in the Windows Server 2003 family are identical. Dmdiag displays the following information for the computer on which it is run:

Computer name and operating system version
Physical disk to disk type
Mount points
Logical Disk Manager (LDM) file versions
Drive letter usage
List of devices
Symbolic links
Disk partition information
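Dmdiag writes this information to the console, so redirecting the output to a text file is a convenient way to capture it for troubleshooting (the file name below is arbitrary):

    dmdiag > C:\dmdiag.txt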

Diskmgmt.msc: Disk Management


Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of the Disk Management snap-in in the Windows Server 2003 family are identical. You can use the Disk Management snap-in to remotely manage disks and volumes on other computers running Windows 2000, Windows XP, or Windows Server 2003.

DiskPart.exe: DiskPart
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of DiskPart in the Windows Server 2003 family are identical. DiskPart includes a few 64-bit parameters that are only available on Itanium-based computers. DiskPart is a text-mode command interpreter that enables you to manage objects (disks, partitions, or volumes) by using scripts or direct input from a command prompt.
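For example, DiskPart commands can be stored in a plain text file and run unattended with the /s parameter. The script name below is hypothetical; the script simply inventories the disks and volumes on the computer:

    rem Contents of inventory.txt
    list disk
    list volume

To run the script from a command prompt:

    diskpart /s inventory.txt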
Format.exe: Format
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of Format in the Windows Server 2003 family are identical.

Format prepares a volume on the specified disk to accept Windows files.
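For example, the following command performs a quick NTFS format of drive E and applies a volume label (the drive letter and label are placeholders):

    format E: /FS:NTFS /V:DATA /Q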
FTonline.exe: Fault-tolerant Disk Mounter
Category

Windows Server 2003 support tool.


Version compatibility

FTonline enables an administrator to mount and recover files from fault-tolerant disks created in previous versions of Windows. This tool is useful if you did not upgrade your disks to dynamic disks, or failed to back up your data before installing Windows Server 2003. You can install FTonline using the Support Tools setup program located in the \Support\Tools folder on the Windows XP Professional and the Windows Server 2003 family of operating systems CDs.
Mountvol.exe: Mountvol
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of Mountvol in the Windows Server 2003 family are identical. Mountvol includes a few 64-bit parameters that are only available on Itanium-based computers. Mountvol creates, deletes, or lists volume mount points. Mountvol enables you to link volumes without using drive letters.
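For example, running Mountvol with no parameters lists the volume GUID paths on the computer; a volume can then be mounted to an empty NTFS folder instead of a drive letter. The folder path and volume GUID below are placeholders:

    mountvol
    md C:\Mount\Data
    mountvol C:\Mount\Data \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
    rem To remove the mount point later
    mountvol C:\Mount\Data /D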
SecInspect.exe: Sector Inspector
Category

Windows Server 2003 command-line tool.


Version compatibility

SecInspect is a command-line diagnostics tool that enables administrators to view the contents of master boot records, boot sectors, and IA64 GUID partition tables. Additional features include creating hexadecimal dumps of binary files and backup/restore of sector ranges. For more information about this tool, see the Help that comes with the tool. To find this tool, see Tool Updates in Tools and Settings Collection.

Basic Disk and Volume Registry Entries


The following registry entries are associated with basic disks and volumes. For more information about the registry, see the Registry Reference in Tools and Settings Collection. The information here is provided as a reference for use in troubleshooting or verifying that the required settings are applied. It is recommended that you do not directly edit the registry unless there is no other alternative. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. This can result in unrecoverable errors in the system. When possible, use Group Policy or other Windows tools, such as Microsoft Management Console (MMC), to accomplish tasks rather than editing the registry directly. If you must edit the registry, use extreme caution.

The following sections describe the basic disk and volume registry entries that are listed below \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\. The headings represent the next level in the path after \Services. For example, Vds\Debuglog is equal to \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Vds\Debuglog.

Vds\Debuglog
Level
Registry path

\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Vds\Debuglog\Level
Version

Windows Server 2003 family. Sets the level of logging for VDS. Use Registry Editor to add a decimal value named Level. Stop and restart the service after changing the value.

VDS Log Options

Item Logged    Bitmask Number
Errors         1
Warnings       2
Trace          4
Information    8

Set the value by adding the bitmask numbers of the values you want to log. For example, decimal 3 logs errors (1) and warnings (2). Decimal 9 logs errors (1) and information (8). The range for this value is 0-15. Set the value to 0xF (decimal 15) to log all items.
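As an illustration, the following commands add the Level entry and restart the Virtual Disk Service from a command prompt. This sketch assumes the value is a REG_DWORD, that the service short name is vds, and that you want decimal 15 to log all items:

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Vds\Debuglog /v Level /t REG_DWORD /d 15 /f
    net stop vds
    net start vds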
Dmadmin\Parameters
EnableDynamicConversionFor1394
Registry path

\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\dmadmin\Parameters\EnableDynamicConversionFor1394
Version

Windows Server 2003 family. Allows a user to convert an Institute of Electrical and Electronics Engineers, Inc. (IEEE) 1394 (FireWire) disk to a dynamic disk on Windows Server 2003 or earlier operating systems. Converting 1394 disks to dynamic is not a tested or supported scenario. This registry entry is provided for compatibility with a Windows 2000 beta that supported this functionality. EnableDynamicConversionFor1394 is a DWORD value with a range of 0-1. The default is 0. To modify it, use Registry Editor. Stop and restart the service after modifying the value.
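If you decide to set this unsupported entry anyway, the change can be made from a command prompt as in the following sketch; dmadmin is the short name of the Logical Disk Manager Administrative Service:

    reg add HKLM\SYSTEM\CurrentControlSet\Services\dmadmin\Parameters /v EnableDynamicConversionFor1394 /t REG_DWORD /d 1 /f
    net stop dmadmin
    net start dmadmin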

Basic Disk and Volume WMI Classes


The following table lists and describes the WMI classes that are associated with basic disks and volumes.

WMI Classes Associated with Basic Disks and Volumes

Class Name                        Namespace      Version Compatibility
Win32_DiskDrive                   \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_DiskDrivePhysicalMedia      \root\cimv2    Windows Server 2003 family
Win32_DiskDriveToDiskPartition    \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_DiskPartition               \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_LogicalDisk                 \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_LogicalDiskToPartition      \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_MappedLogicalDisk           \root\cimv2    Windows Server 2003 family
Win32_PhysicalMedia               \root\cimv2    Windows Server 2003 family
Win32_Volume                      \root\cimv2    Windows Server 2003 family
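For a quick look at the kind of information these classes expose, you can query several of them from a command prompt with the WMIC utility, which maps aliases such as diskdrive, partition, and logicaldisk to the corresponding Win32 classes:

    wmic diskdrive get Model,Size,Partitions
    wmic partition get Name,Type,Size
    wmic logicaldisk get DeviceID,FileSystem,Size,FreeSpace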

For more information about these WMI classes, see the WMI SDK documentation on MSDN.

What Are Dynamic Disks and Volumes?

Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2


In this section

Types of Dynamic Volumes
Dynamic Disk and Volume Scenarios

Like basic disks, which are the most commonly used storage type found on computers running Microsoft Windows, dynamic disks can use the master boot record (MBR) or GUID partition table (GPT) partitioning scheme. All volumes on dynamic disks are known as dynamic volumes. Dynamic disks were first introduced with Windows 2000 and provide features that basic disks do not, such as the ability to create volumes that span multiple disks (spanned and striped volumes), and the ability to create fault tolerant volumes (mirrored and RAID-5 volumes). Dynamic disks offer greater flexibility for volume management because they use a database to track information about dynamic volumes on the disk and about other dynamic disks in the computer. Because each dynamic disk in a computer stores a replica of the dynamic disk database, Windows Server 2003 can repair a corrupted database on one dynamic disk by using the database on another dynamic disk. The location of the database is determined by the partition style of the disk.

On MBR disks, the database is contained in the last 1 megabyte (MB) of the disk.

On GPT disks, the database is contained in a 1-MB reserved (hidden) partition known as the Logical Disk Manager (LDM) Metadata partition.

All online dynamic disks in a computer must be members of the same disk group, which is a collection of dynamic disks. A computer can have only one dynamic disk group, also called the primary disk group. Each disk in a disk group stores a replica of the same dynamic disk database. A disk group uses a name consisting of the computer name plus a suffix of Dg0. The disk group name is stored in the registry.

When you move dynamic disks to a computer that has existing dynamic disks, you must import the dynamic disks to merge the databases on the moved disks with the databases on the existing dynamic disks. The disk group name on a computer never changes, as long as the disk group contains dynamic disks. If you remove the last disk in the disk group or convert all dynamic disks to basic, the registry entry remains. However, if you create a dynamic disk again on that computer, a new disk group name is generated. The computer name in the disk group remains the same, but the suffix is Dg1 instead of Dg0. When you move a dynamic disk to a computer that has no dynamic disks, the dynamic disk retains its disk group name and ID from the original computer and uses them on the local computer.

For more information about converting basic disks to dynamic disks, including the limitations of dynamic disks, see Converting a Basic Disk into a Dynamic Disk in How Basic Disks and Volumes Work.

Types of Dynamic Volumes


A dynamic volume is a volume that is created on a dynamic disk. Dynamic volume types include simple, spanned, and striped volumes. Windows Server 2003 also supports mirrored and RAID-5 volumes, which are fault tolerant. Fault tolerance is the ability of computer hardware or software to make sure that your data is still available, even if there is a hardware failure. Regardless of the partition style used (MBR or GPT), you can create up to 1000 dynamic volumes per disk group, although boot time increases as the number of volumes increases. The recommended number of dynamic volumes is 32 or fewer per disk group. The following sections describe the different types of dynamic volumes.

Simple Volumes
Simple volumes are the dynamic-disk equivalent of the primary partitions and logical drives found on basic disks. When creating simple volumes, keep these points in mind:

If you have only one dynamic disk, you can create only simple volumes.
You can increase the size of a simple volume to include unallocated space on the same disk or on a different disk. The volume must be unformatted or formatted by using NTFS. You can increase the size of a simple volume in two ways (a DiskPart sketch follows the note below):

By extending the simple volume on the same disk. The volume remains a simple volume, and you can still mirror it.
By extending a simple volume to include unallocated space on other disks on the same computer. This creates a spanned volume.

Note

If the simple volume is the system volume or the boot volume, you cannot extend it.
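To illustrate the first method, the following DiskPart sketch creates a simple volume on dynamic disk 1 and later extends it with additional space on the same disk. The disk number, sizes, and drive letter are placeholders, and the volume must be unformatted or formatted with NTFS before it is extended:

    create volume simple size=4096 disk=1
    assign letter=E
    rem Later, add 1024 MB of unallocated space from the same disk
    select volume E
    extend size=1024 disk=1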

Spanned Volumes
Spanned volumes combine areas of unallocated space from multiple disks into one logical volume. The areas of unallocated space can be different sizes. Spanned volumes require two disks, and you can use up to 32 disks. When creating spanned volumes, keep these points in mind:

You can extend only NTFS volumes or unformatted volumes.
After you create or extend a spanned volume, you cannot delete any portion of it without deleting the entire spanned volume.
You cannot stripe or mirror spanned volumes. For more information about striped or mirrored volumes, see Striped Volumes or Mirrored Volumes later in this section.
Spanned volumes do not provide fault tolerance. If one of the disks containing a spanned volume fails, the entire volume fails, and all data on the spanned volume becomes inaccessible. The reliability of a spanned volume is less than that of the least reliable disk in the set.

Striped Volumes
Striped volumes improve disk input/output (I/O) performance by distributing I/O requests across disks. Striped volumes are composed of stripes of data of equal size written across each disk in the volume. They are created from equally sized, unallocated areas on two or more disks. In Windows Server 2003, the size of each stripe is 64 kilobytes (KB) and cannot be changed. Striped volumes cannot be extended or mirrored and do not offer fault tolerance. If one of the disks containing a striped volume fails, the entire volume fails, and all data on the striped volume becomes inaccessible. The reliability of a striped volume is less than that of the least reliable disk in the set.
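For example, a stripe across two dynamic disks can be created with DiskPart; the disk numbers, size, and drive letter are placeholders:

    create volume stripe size=8192 disk=1,2
    assign letter=S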

Mirrored Volumes
A mirrored volume is a fault-tolerant volume that provides a copy of a volume on another disk. Mirrored volumes provide data redundancy by duplicating the information contained on the volume. The two disks that make up a mirrored volume are known as mirrors. Each mirror is always located on a different disk. If one of the disks fails, the data on the failed disk becomes unavailable, but the system continues to operate by using the unaffected disk. Mirrored volumes are available only on computers running the Windows 2000 Server family or Windows Server 2003.

RAID-5 Volumes
A RAID-5 volume is a fault-tolerant volume that stripes data and parity across three or more disks. Parity is a calculated value that is used to reconstruct data if one disk fails. When a disk fails, Windows Server 2003 continues to operate by recreating the data that was on the failed disk from the remaining data and parity. RAID-5 volumes are available only on computers running the Windows 2000 Server family or Windows Server 2003.
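For example, a RAID-5 volume across three dynamic disks can be created with DiskPart on a computer running Windows Server 2003; the disk numbers, size, and drive letter are placeholders:

    create volume raid size=8192 disk=1,2,3
    assign letter=R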

Dynamic Disk and Volume Scenarios


To gain the maximum benefit from dynamic disks and volumes, it is best to use them in computers with more than one disk, which allows you to scale the storage to match your needs. Dynamic disks and volumes are commonly used in the following scenarios.
Create a spanned volume to increase volume size

Spanned volumes are typically created by a user who has at least two disks in their computer. If the user doesn't require the high read throughput of a striped volume or the fault tolerance offered by mirrored or RAID-5 volumes, the user can create a spanned volume. If the data volume on a user's primary disk gets too full, the user can extend that volume onto a second disk in their computer. This enables the user to create a big volume that uses space on two disks. If necessary, they can extend the volume to cover up to 32 disks. If the user has some space left on their primary disk after performing configuration changes, they can use that space and combine it with space on the second disk, creating a spanned volume.
Create a spanned volume to combine two or three small disks into a large volume

If a user has two or more small disks in their computer, they can combine those disks into a single large volume by creating a spanned volume. To the user, the space spread across the disks would look and function like a single volume.
Create a striped volume to accommodate high read/write throughput

Striped volumes are typically created by the user who has at least two disks in their computer. If the user requires high read/write throughput but does not require the fault-tolerance offered by mirrored or RAID-5 volumes, the user can create a striped volume.
Create a RAID-5 volume to protect critical data

RAID-5 volumes are typically created by the user who requires fault-tolerance and who has at least three disks in their computer. If one of the disks in the RAID-5 volume fails, the data on the remaining disks, along with the parity information, can be used to recover the lost data. RAID-5 volumes are well-suited to storing data that will need to be read frequently but written to less frequently. Database applications that read randomly work well with the built-in load balancing of a RAID-5 volume.
Create a mirrored volume to protect critical data

Mirrored volumes are typically created by the user who requires fault-tolerance and who has two disks in their computer. If one disk fails, the user always has a copy of their data on the second disk. Mirrored volumes provide better write performance than RAID-5 volumes.
Create a mirror to migrate data to a larger disk

If a user has run out of room on the simple volume where they store data and there is no room left on the disk to extend the volume, they can move this data to a larger disk instead of creating a spanned volume. The user can create a mirrored volume using the simple volume containing their data and a larger disk. After creating the mirrored volume, the user can break the mirrored volume and extend the new volume to fill the larger disk, leaving a complete copy of the original volume with available space for new data. The space on the original volume can be reclaimed for other uses.
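A rough DiskPart outline of this migration follows. It assumes the existing data volume is volume 3 and the new, larger dynamic disk is disk 2; verify the behavior of the break command in the DiskPart documentation before removing the original plex:

    select volume 3
    add disk=2
    rem After resynchronization completes, break the mirror, keep the copy
    rem on disk 2, and then extend that volume into the remaining space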

How Dynamic Disks and Volumes Work

Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

In this section

Dynamic Disks and Volumes Architecture
Dynamic Disk and Volume Physical Structure
Dynamic Disk and Volume Processes and Interactions
Types of RAID Volumes

Like basic disks, which are the most commonly used storage type found on computers running Microsoft Windows, dynamic disks can use the master boot record (MBR) or GUID partition table (GPT) partitioning scheme. All volumes on dynamic disks are known as dynamic volumes. Dynamic disks were first introduced with Windows 2000 and provide features that basic disks do not, such as the ability to create volumes that span multiple disks (spanned and striped volumes), and the ability to create fault tolerant volumes (mirrored and RAID-5 volumes). Dynamic disks offer greater flexibility for volume management because they use a database to track information about dynamic volumes on the disk and about other dynamic disks in the computer. Because each dynamic disk in a computer stores a replica of the dynamic disk database, Windows Server 2003 can repair a corrupted database on one dynamic disk by using the database on another dynamic disk. An optimal environment for dynamic disks and volumes is defined as follows:

Windows Server 2003 operating system is installed and functioning properly.
The dynamic disks are functioning properly and they display the Online status in the Disk Management snap-in.
The dynamic volumes display the Healthy status in the Disk Management snap-in.

The following sections provide an in-depth view of how dynamic disks and volumes work in an optimal environment.

Dynamic Disk and Volume Architecture


Dynamic disks and volumes rely on the Logical Disk Manager (LDM) and Virtual Disk Service (VDS) and their associated components. These components enable you to perform tasks such as converting basic disks into dynamic disks, and creating fault-tolerant volumes. The following diagram shows the LDM and VDS components.

Logical Disk Manager and Virtual Disk Service Components

The following table lists the LDM and VDS components and provides a brief description of each.

Logical Disk Manager and Virtual Disk Service Components

Disk Management snap-in (Dmdlgs.dll, Dmdskmgr.dll, Dmview.ocx, Diskmgmt.msc)
Binaries that comprise the Disk Management snap-in user interface.

DiskPart command line utility (Diskpart.exe)
A scriptable alternative to the Disk Management snap-in.

Mount Manager command line (Mountvol.exe)
A command line utility that can be used to create, delete, or list volume mount points.

Virtual Disk Service (Vds.exe, Vdsutil.dll)
A program used to configure and maintain volume and disk storage.

Virtual Disk Service provider for basic disks and volumes (Vdsbas.dll)
The Virtual Disk Service calls into the basic provider when configuring basic disks and volumes.

Virtual Disk Service provider for dynamic disks and volumes (Vdsdyndr.dll)
The Virtual Disk Service calls into the dynamic provider when configuring dynamic disks and volumes.

Dynamic disk components (Dmboot.sys, Dmconfig.dll, Dmintf.dll, Dmio.sys, Dmload.sys, Dmremote.exe, Dmutil.dll)
Drivers and user mode components used to configure dynamic disks and volumes and perform I/O.

Logical Disk Administrator service (Dmadmin.exe)
The VDS provider for dynamic disks and volumes uses the interfaces exposed by this service to configure dynamic disks.

Logical Disk Service (Dmserver.dll)
A service that detects and monitors new hard disk drives and sends disk volume information to Logical Disk Manager Administrative Service for configuration. If this service is stopped, dynamic disk status and configuration information might become outdated. If this service is disabled, any services that explicitly depend on it will fail to start.

Basic disk I/O driver (Ftdisk.sys)
A driver that manages all I/O for basic disks. Other system components, such as mount point manager, call into this driver to get information about basic disk volumes.

Mount point manager driver (Mountmgr.sys)
A binary that tracks drive letters, folder mount paths, and other mount points for volumes. Assigns a unique volume mount point of the form \??\Volume<GUID> to each volume, in addition to any drive letters or folder paths that have been assigned by the user. Ensures that a volume will get the same drive letter each time the computer boots, and also tries to retain a volume's drive letter when the volume's disk is moved to a new computer.

Partition manager (Partmgr.sys)
A filter driver that sits on top of the disk driver. All disk driver requests pass through the partition manager driver. This driver creates partition devices and notifies the volume managers of partition arrivals and removals. Exposes IOCTLs that return information about partitions to other components, and allow partition configuration.

Dynamic Disk and Volume Physical Structure


Dynamic disks can use either the master boot record (MBR) or GUID partition table (GPT) partitioning style. x86-based computers use disks with the MBR partitioning style and Itanium-based computers use disks with the GPT partitioning style. The following diagram compares a dynamic MBR disk to a dynamic GPT disk.

Comparison of Dynamic MBR and GPT Disks

The following table compares dynamic MBR and GPT disks.

Comparison of Dynamic MBR and GPT Disks

Number of volumes on dynamic disks
MBR disk (x86-based computer): Supports up to 1000 volumes per disk group.
GPT disk (Itanium-based computer): Supports up to 1000 volumes per disk group.

Compatible operating systems
MBR disk: Can be read by Windows 2000 (all versions), Windows XP, and Windows Server 2003 (all versions, for x86-based computers and Itanium-based computers).
GPT disk: Can be read by Windows XP 64-Bit Edition, the 64-bit version of Windows Server 2003, Enterprise Edition, and the 64-bit version of Windows Server 2003, Datacenter Edition.

Maximum size of dynamic volumes
MBR disk: Supports the maximum volume size of the file system used to format the volume. Up to 64 terabytes for a striped or spanned volume using 32 disks.
GPT disk: Supports the maximum volume size of the file system used to format the volume. Up to 64 terabytes for a striped or spanned volume using 32 disks.

Partition tables (copies)
MBR disk: Contains one copy of the partition table.
GPT disk: Contains primary and backup partition tables for redundancy and checksum fields for improved partition structure integrity.

Locations for data storage
MBR disk: Stores data in partitions and in unpartitioned space. Although most user and program data is stored within partitions, some system metadata might be stored in hidden or unpartitioned sectors created by OEMs or other operating systems.
GPT disk: Stores user and program data in partitions that are visible to the user. Stores system metadata that is critical to platform operation in partitions that the 64-bit versions of Windows Server 2003 recognize but do not make visible to the user. Does not store any data in unpartitioned space.

Troubleshooting methods
MBR disk: Uses the same methods and tools used in Windows 2000.
GPT disk: Uses tools designed for GPT disks. (Do not use MBR troubleshooting tools on GPT disks.)

Dynamic Disk and Volume Processes and Interactions


The following section discusses the different processes that are used by dynamic disks and volumes and discusses the ways in which those processes interact. This section assumes that your computer has at least three dynamic disks and that the dynamic disks are functioning properly.

Creating a simple volume

Creating a simple volume involves the Virtual Disk Service (VDS) and Logical Disk Manager (LDM). A simple volume is a dynamic volume made up of disk space from a single dynamic disk. When you use Disk Management or DiskPart to create a simple volume, they call the VDS API, which sends an IOCTL to the Volume Manager to create the volume. Volume Manager creates the simple volume and documents it in the dynamic disk database. Next, Volume Manager sends information about the new volume to Plug and Play. Plug and Play sends information about the new volume to the Mount Manager and to VDS. Mount Manager assigns a drive letter to the volume and, after VDS sends the information to Disk Management or DiskPart, the volume is available and ready for use.

What Happens When a Simple Volume Is Created

Partition Entries on MBR Dynamic Disks


Like basic disks, dynamic disks contain an MBR that includes the master boot code, the disk signature, and the partition table for the disk. However, the partition table on a dynamic disk does not contain an entry for each volume on the disk because volume information is stored in the dynamic disk database. Instead, the partition table contains entries for the system volume, boot volume (if it is not the same as the system volume), and one or more additional partitions that cover all the remaining unallocated space on the disk. All these partitions use System ID 0x42, which indicates that these partitions are on a dynamic disk. Placing these partitions in the partition table prevents MBR-based disk utilities from interpreting the space as available for new partitions.

Note

In Windows 2000, the partition entries for existing basic volumes were preserved in the partition table when the disk was converted to dynamic. These entries prevented the converted dynamic volumes from being extended. This limitation has been removed from Windows Server 2003 for all converted volumes except the boot and system volumes. Partition entries for all other converted volumes are removed from the partition table, and therefore these volumes can be extended.

The following example shows a partial printout of an MBR on a dynamic disk that contains four simple volumes: the system volume, the boot volume, and two data volumes. Note, however, that the partition table contains entries for only three partitions. The first entry is the system volume, which is marked as active. The second entry is the boot volume, and the third entry is the container partition for the two data volumes on the disk. All entries are type 0x42, which specifies dynamic volumes.
(Partial hexadecimal printout of the MBR omitted; it shows three partition table entries, each of type 0x42, and ends with the 55 AA end-of-sector signature.)

Partition Entries on Dynamic GPT Disks

The following example illustrates a partial hexadecimal printout of a GUID partition entry array on a dynamic GPT disk. The GUID partition entry array shows the Microsoft Reserved partition plus additional entries that appear only on dynamic GPT disks:

The LDM Metadata partition is a 1-megabyte hidden partition that stores the dynamic disk database, which contains information about all dynamic disks and volumes installed on the computer. The LDM Data partition acts as a container for dynamic volumes. Individual dynamic volumes do not contain entries in the GUID partition entry array.

The partition type GUIDs shown in the printout match the entries in the table titled Partition Type GUIDs later in this section.
(Partial hexadecimal printout of the GUID partition entry array omitted; it shows the Microsoft Reserved partition entry followed by the LDM Metadata partition and LDM Data partition entries, with each entry's partition type GUID at the start of the entry.)

Partition Type GUIDs

Partition Type                              GUID Value
Unused entry                                {00000000000000000000000000000000}
EFI System partition                        {28732AC11FF8D211BA4B00A0C93EC93B}
Microsoft Reserved partition                {16E3C9E35C0BB84D817DF92DF00215AE}
Primary partition on a basic disk           {A2A0D0EBE5B9334487C068B6B72699C7}
LDM Metadata partition on a dynamic disk    {AAC808588F7EE04285D2E1E90434CFB3}
LDM Data partition on a dynamic disk        {A0609BAF3114624FBC683311714A69AD}

GUID Partition Entry Attributes


GUID partition entry attributes are descriptors for how a partition is used. The attributes are specified within a 64-bit value, so EFI supports up to 64 different attributes. The 64-bit versions of Windows Server 2003 use the attributes described in the following table.

GUID Partition Entry Attributes Used by the 64-Bit Editions of Windows on Itanium-based Computers

Bits      Description
Bit 0     Specifies that this partition is required for the platform to function. All original equipment manufacturer (OEM) partitions must have this bit set to protect the OEM partition from being overwritten by the disk tools supplied with Windows Server 2003.
Bit 60    Marks the partition as read-only. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.
Bit 62    Marks the partition as hidden. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.
Bit 63    Prevents the system from assigning a default drive letter to the partition. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.

Types of RAID Volumes


A redundant array of independent disks (RAID) is a fault-tolerant disk configuration in which part of the physical storage capacity contains redundant information about data stored on the disks. The redundant information is either parity information (in the case of a RAID-5 volume), or a complete, separate copy of the data (in the case of a mirrored volume). The redundant information enables regeneration of the data if one of the disks or the access path to it fails, or a sector on the disk cannot be read. Windows Server 2003 supports three types of software RAID configurations:

Striped volumes use RAID-0, which stripes data across multiple disks. RAID-0 does not offer fault tolerance, but it does offer increased performance.
Mirrored volumes use RAID-1, which provides redundancy by creating two identical copies of a volume.
RAID-5 volumes use RAID-5, which stripes parity information across multiple disks. This parity information can be used to recreate data stored on a failed disk.

Note

Fault tolerance is never an alternative to performing regular backups.

Use DiskPart or Disk Management to configure and repair mirrored volumes and RAID-5 volumes. The following figure shows a mirrored and a RAID-5 volume with failed redundancy status. The mirrored and RAID-5 volumes are in failed redundancy because one of the disks that makes up the volumes is offline.

Mirrored and RAID-5 Volumes That Have Failed Redundancy Status

Striped Volumes
Striped volumes improve input/output (I/O) performance by distributing I/O requests across two or more disks. Striped volumes are composed of stripes of data of equal size written across each disk in the volume. They are created from equally sized, unallocated areas on two or more disks. For Windows Server 2003, the size of each stripe is 64 kilobytes (KB). The disks in a striped volume do not need to be identical, but there must be unused space available on each disk that you want to include in the volume.

You cannot increase the size of a striped volume after it is created. To change the size of a striped volume, you must first complete the following steps:

1. Back up the data.
2. Delete the striped volume by using Disk Management or DiskPart.
3. Create a new, larger, striped volume by using Disk Management or DiskPart.
4. Restore the data to the new striped volume.

Striped volumes do not contain redundant information. Therefore, the cost per gigabyte on a striped volume is identical to that for the same amount of storage configured from a contiguous area on a single disk. If one disk fails, the entire striped volume fails and no data can be recovered. The reliability of a striped volume is less than that of the least reliable disk in the set.

Striped volumes are used for performance reasons. In general, striped volumes work well when you need to distribute disk I/O operations. Access to the data on a striped volume is usually faster than access to the same data would be on a single disk, because the I/O is spread across more than one disk. Therefore, Windows Server 2003 can be seeking on more than one disk at the same time and can have simultaneous read or write operations. A striped volume works well in the following situations:

When users need rapid read or write access to large databases or other data structures.
When collecting data from external sources at very high transfer rates. This is especially useful when collection is done asynchronously.
When multiple independent applications require access to data stored on the striped volume.
When the operating system supports asynchronous multithreading, which helps with load balancing of disk read and write operations.

Mirrored Volumes
A mirrored volume provides an identical twin of the selected volume. All data written to the mirrored volume is written to both volumes, which results in usable disk capacity of only 50 percent.

Because dual-write operations can degrade system performance, many mirrored volume configurations use duplexing, which means that each disk in the mirrored volume resides on its own disk controller. The benefit of duplexing is that you reduce the risk of a single point of failure: if one disk controller fails, the other controller (and the disk on that controller) continues to operate normally. If you do not use two controllers, a failed controller makes both volumes in a mirrored volume inaccessible until the controller is replaced.

Note

If one disk in a mirrored volume fails, the computer continues to run and the mirrored volume is still accessible. However, the mirrored volume is no longer fault-tolerant, so you need to replace the failed disk or controller as soon as possible. If your computer supports hot-swappable hard disks, you do not need to restart the computer to install a new disk and resynchronize the mirror.

Almost any volume can be mirrored, including the system and boot volumes. However, you cannot mirror the EFI System partition on GPT disks. In addition, you cannot add disk space to a mirrored volume to increase the size of the volume later.

Advantages of Mirrored Volumes

Random disk-read operations on a mirrored volume are more efficient than random disk-read operations on a single volume. Windows Server 2003 has the capacity to load balance read operations across the disks. With current SCSI and Fibre Channel technology, two disk read operations can be done simultaneously.

When one of the volumes that makes up a mirrored volume fails, the mirrored volume is said to have lost redundancy. A mirrored volume that loses redundancy impacts system performance the least, because the remaining disk contains all of the data. No data recomputation is needed to run the system. When you configure your boot volume on a mirrored volume, you do not have to reinstall Windows Server 2003 to restart the computer after a disk failure.

When compared to a RAID-5 volume, a mirrored volume:

Has a lower entry cost because it requires only two disks, whereas a RAID-5 volume requires three or more disks.
Requires less system memory.
Provides good overall performance.
Does not degrade performance during a failure except when high-volume read operations are performed. However, if a single write error occurs, redundancy is lost.

A mirrored volume works well in the following situations:

When extremely high data reliability is required. A duplexed mirrored volume has the best data reliability because the entire I/O subsystem is duplicated.
When you have heavy write loads that need fault tolerance. In this case, mirrored volumes perform better than RAID-5 volumes.
When simplicity is important. Mirrored volumes are simple to understand and easy to set up.

Disadvantages of Mirrored Volumes

Disk-write operations on mirrored volumes are less efficient because data must be written to both disks. This performance penalty is minor, however, because writing to both disks usually takes place concurrently. In many situations, an end-user application is not affected by data being written to both disks.

Another performance penalty occurs when you resynchronize a mirrored volume. Resynchronization is the process by which a mirrored volume's mirrors are made to contain identical data. During resynchronization, performance is affected because the computer is performing many I/O operations to copy the data.

Mirrored volumes are the least efficient at maximizing storage space. Because the data is duplicated, the space requirements for a mirrored volume are higher than for a RAID-5 volume.

Best Practices for Configuring Mirrored Volumes

To a large extent, how you configure your mirrored volumes depends on the number of disks and controllers that you want to have on the computer running Windows Server 2003. The following are general guidelines for configuring mirrored volumes:

Keep data volumes separate from boot volumes for better performance. Configuring your boot volume on a disk (and controller) that does not contain data sets gives you better performance.
Do not put the paging file on a mirrored volume. The paging file does not need to be redundant and can decrease the mirrored volume's performance due to frequent disk-writing operations. Instead, put the paging file on a striped or simple volume.
For additional protection, put each disk in a mirrored volume on its own disk controller. When you use a mirrored volume for your system or boot volumes, you can make the configuration more fault-tolerant by putting each disk member of the mirrored volume on a separate controller. This approach allows you to survive controller or disk failures. Putting each disk member of the mirrored volume on a separate channel of a multichannel controller does not make the controller fault tolerant. However, this approach might improve performance.
Use identical disks when putting the system or boot volume on a mirrored volume. Although it is not necessary to use identical disks or to have the same volumes on each disk, it is strongly recommended that you use identical disks and controllers if you put your system and boot volume on a mirrored volume.

Note

After you mirror your system volume, you must test your configuration by starting Windows Server 2003 from each volume to ensure that you can start Windows Server 2003 if one of the disks fails. Startup problems can occur if the disks use different geometries or if the system volumes are at different offsets on the disk.

Creating Mirrored Volumes

To create a mirrored volume, use the Disk Management snap-in or the DiskPart command-line tool. You can create a mirrored volume in two ways:

Add a mirror to an existing simple volume on a dynamic disk. You must have an area of unused space on a different dynamic disk at least as large as the original simple volume. If you do not have a dynamic disk with enough unallocated space, the Add Mirror command is unavailable.
Create a new mirrored volume from unallocated space on two dynamic disks. The amount of disk space used for each half of the mirrored volume must be equal. If you have less unallocated space on one disk than the other, the mirrored volume can be no larger than the smaller of the two unallocated spaces.

In either case, if you have unallocated space left over, you can use the space to create other volumes.
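The first approach can also be performed with DiskPart. The following sketch adds a mirror to an existing simple volume; the volume and disk numbers are placeholders:

    list volume
    select volume 1
    add disk=2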

Mirroring the System and Boot Volumes in x86-based Computers

To ensure that your x86-based computer can load Windows Server 2003 if one of the disks or controllers fails, you can mirror the system and boot volumes.

Mirroring the system volume makes an exact copy of the volume that contains the hardware-specific files needed to load Windows Server 2003.
Mirroring the boot volume makes an exact copy of the volume that contains the Windows Server 2003 operating system.

The boot and system volumes can be separate volumes on the same disk, separate volumes on different disks, or they can be the same volume on the same disk. In addition, the system and boot volumes can be mirrored to a different disk on the same controller or to a different disk on a different controller. The following figure illustrates some common configurations for mirroring system and boot volumes.

Common Configurations of Mirrored System Volumes and Boot Volumes (x86-based Computers)

Guidelines for Mirroring System or Boot Volumes in x86-based Computers

Before you mirror the system or boot volume in an x86-based computer, note the following guidelines:

Use care when selecting Advanced Technology Attachment (ATA) disks for a mirrored system volume. Although using ATA disks is supported, the recovery procedure is more complicated when the master disk on the primary integrated device electronics (IDE) channel fails. In this case, you must move the disk with the remaining mirror to the primary IDE channel and set its jumper to master position.
Do not mirror the system volume by using an ATA disk with a SCSI disk because startup problems can occur if one of the disks fails.
If you use duplexed SCSI controllers, make sure to use identical controllers from the same manufacturer.
You must test the mirrored system volume before a failure to ensure that the computer can start from the remaining mirror.

Mirroring the Boot Volume and Replicating the EFI System Partition in Itanium-based Computers

To ensure that your Itanium-based computer can load Windows Server 2003 if one of the disks or controllers fails, you can mirror the boot volume. Mirroring the boot volume makes an exact copy of the volume that contains the Windows Server 2003 operating system.

In addition to mirroring the boot volume, you should also replicate the EFI system partition. If you do not replicate the EFI system partition, and the disk holding it fails, you will not be able to boot the computer, even if there is a good, remaining boot volume on another disk. The process to replicate the EFI system partition involves creating a new EFI system partition on a second GUID partition table (GPT) disk. Because the second EFI system partition is empty, you must copy the contents from the original EFI system partition into the second EFI system partition. After replicating the EFI system partition, you must use Bootcfg.exe to add the appropriate boot entries to the NVRAM to point to the copy of the EFI system partition you placed on the second disk. Later, if you make any changes to the original EFI system partition, you must manually replicate those changes in the second EFI system partition.

The boot volume and the EFI system partition can be on the same disk, or they can be on different disks. The following figure illustrates the most common configuration.

Common Configurations of Mirrored Boot Volumes and Replicated EFI System Partitions (Itanium-based Computers)

Guidelines for Mirroring the Boot Volume and Replicating the EFI System Partition in Itanium-based Computers

Before you mirror the boot volume or replicate the EFI system partition in an Itanium-based computer, you must note the following guidelines:

Use care when selecting Advanced Technology Attachment (ATA) disks for a replicated EFI system partition. Although using ATA disks is supported, the recovery procedure is more complicated when the master disk on the primary integrated device electronics (IDE) channel fails. In this case, you must move the disk with the remaining EFI system partition to the primary IDE channel and set its jumper to master position.
Do not replicate the EFI system partition by using an ATA disk with a SCSI disk, because startup problems can occur if one of the disks fails.
If you use duplexed SCSI controllers, make sure to use identical controllers from the same manufacturer.
You must test the mirrored boot volume and replicated EFI system partition before a failure, to ensure that the computer can start from the remaining boot volume and EFI system partition.

RAID-5 Volumes
Using three or more disks, a RAID-5 volume dedicates the equivalent of the space of one disk in the RAID-5 volume for storing parity stripes, but distributes the parity stripes across all the disks in the group. The data and parity information are arranged on the volume so that they are always on different disks. Implementing a RAID-5 volume requires a minimum of three disks. The disks do not need to be identical, but there must be equally sized blocks of unallocated space available on each disk in the set. The disks can be on the same or different controllers. However, neither the system volume nor the boot volume can be on a RAID-5 volume.

Note

As with striped volumes, you cannot add disks to a RAID-5 volume if you need to increase the size of the volume later.

If one of the disks in a RAID-5 volume fails, none of the data is lost. When a read operation requires data from the failed disk, the system reads all of the remaining good data stripes in the stripe and the parity stripe. Each data stripe is subtracted (by using XOR) from the parity stripe; the order is not important. The result is the missing data stripe.

When the system needs to write a data stripe to a disk that has failed, it reads the data stripes on the other disks. The system uses the data stripes on the remaining disks to calculate the parity. Because the data stripe on the failed disk is unavailable, it is not written; the system only updates the parity stripe.

Advantages of RAID-5 Volumes

RAID-5 volumes work well for storing data that will need to be read frequently but written to less frequently, and also work well in the following situations:

In large query or database mining applications where reads occur much more frequently than writes. Performance degrades as the percentage of write operations increases. Database applications that read randomly work well with the built-in load balancing of a RAID-5 volume.
Where a high degree of fault tolerance is required without the expense (incurred by the additional disk space required) of a mirrored volume. A RAID-5 volume is significantly more efficient than a mirrored volume when larger numbers of disks are used. The space required for storing the parity information is equivalent to 1/Number of disks, so a 10-disk array uses 1/10 of its capacity for parity information. The disk space that is used for parity decreases as the number of disks in the array increases.

Disadvantages of RAID-5 Volumes

RAID-5 volumes are not well suited for most write-intensive workloads because a single write is likely to generate two disk reads (one to read the old data and one to read the old parity information) and two writes (one to update the data and a second to update the parity information). For example, a RAID-5 volume is not well suited for the following situations:

For hosting applications that require high-speed data collection. This type of application requires continuous high-speed disk writes, which do not work well with the asymmetrical I/O balance inherent in RAID-5 volumes and the extra I/Os required to write the parity stripe.
In transaction-processing database applications in which records are continually updated, such as in financial applications where balances are frequently updated.

If a disk that is part of a RAID-5 volume fails, read operations for data stripes on that disk are substantially slower than for a single disk. The software has to read all of the other disks in the set to calculate the data. A RAID-5 volume requires more system memory than a mirrored volume. In addition, regenerating a RAID-5 volume negatively impacts performance more than regenerating a mirrored volume does.

Guidelines for Configuring RAID-5 Volumes

When configuring a RAID-5 volume, buy disks based on:

Performance. RAID-5 performance improves for each disk that you use.
Percentage of usable storage. The space lost to parity information decreases with each additional disk.
Cost per gigabyte. A RAID-5 volume requires a minimum of three disks. Buying three large disks might cost less per gigabyte, but buying six smaller disks results in better performance and more available disk space because less space is used for parity.

The following table compares two RAID-5 configurations that provide the same disk capacities. The configuration that uses six disks is the more efficient storage solution in terms of capacity and performance.

Comparison of Two RAID-5 Configurations

Comparative Feature       3 Disks at 36.4 GB Each    6 Disks at 18.2 GB Each
Total capacity            109.2 GB                   109.2 GB
Space used for parity     36.0 GB                    18.6 GB
Available disk space      73.2 GB                    90.6 GB

Follow these guidelines for configuring RAID-5 volumes:

Do not configure your system volume or your boot volume on a RAID-5 volume. In addition, keep the RAID-5 volume on a different controller and disk than your system and boot volume. Using separate controllers improves performance and can accelerate recovery from hardware failures.
Do not put the paging file on a RAID-5 volume. The paging file does not need to be redundant and can decrease the RAID-5 volume's performance due to frequent disk-writing operations. Instead, put the paging file on a striped or simple volume.

Fault-Tolerant Hardware and Software


You can create a RAID-5 volume using hardware- or software-based solutions. With hardware-based RAID, an intelligent disk controller handles the creation and regeneration of redundant information on the disks that make up the RAID-5 volume. The Windows Server 2003 family of operating systems provides software-based RAID, where the creation and regeneration of redundant information on the disks in the RAID-5 volume is handled by the Logical Disk Manager (LDM). In either case, data is stored across all members in the disk array. In general, hardware-based RAID offers performance advantages over software-based RAID because hardware-based RAID incurs no overhead on the system processor. For example, you can improve data throughput significantly by implementing RAID-5 through hardware that does not use system software resources. Read and write performance and total storage size can be further improved by using multiple disk controllers. Some hardware-based RAID arrays support hot swapping, which enables you to replace a failed disk or controller while the computer is still running Windows Server 2003. Consider the following points when you evaluate a fault-tolerant hardware or software solution:

Hardware fault tolerance provides better performance.
Hardware fault tolerance offers features such as hot sparing, in which additional disks are attached to the controller and left in standby mode. If a failure occurs, the controller uses one of the spare disks to replace the bad disk.
Software fault tolerance is less expensive.

Regardless of whether you implement fault tolerance by using hardware, software, or both, implementing fault tolerance does not reduce the need for backups.

Dynamic Disks and Volumes Tools and Settings

Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2


In this section

Dynamic Disk and Volume Tools
Dynamic Disk and Volume Registry Entries
Dynamic Disk and Volume WMI Classes

The following tools, registry settings, and Windows Management Instrumentation (WMI) classes are associated with dynamic disks and volumes.

Dynamic Disk and Volume Tools


The following tools are associated with dynamic disks and volumes.

Bootcfg.exe: Boot Configuration Tool
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of the Boot Configuration Tool in the Windows Server 2003 family are identical. You can use the Boot Configuration Tool to query, configure, or change settings in the Boot.ini file on your computer.
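For example, you could run the following commands at a command prompt (a minimal sketch; the timeout value shown is only an example):

rem Display the current Boot.ini settings
bootcfg /query
rem Set the boot menu timeout to 10 seconds
bootcfg /timeout 10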

Dmdiag.exe: Disk Manager Diagnostics
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of Dmdiag in the Windows Server 2003 family are identical. Dmdiag displays the following information for the computer on which it is run:

Computer name and operating system version
Physical disk to disk type
Mount points
Logical Disk Manager (LDM) file versions
Drive letter usage
List of devices
Symbolic links
Disk partition information
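Because Dmdiag writes its report to standard output, you can capture the output to a file for later review (a minimal sketch; the file path is only an example):

rem Save the Dmdiag report to a text file
dmdiag > C:\dmdiag.txt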

Diskmgmt.msc: Disk Management


Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of the Disk Management snap-in in the Windows Server 2003 family are identical. You can use the Disk Management snap-in to remotely manage disks and volumes on other computers running Windows 2000, Windows XP, or Windows Server 2003.
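To open the snap-in on the local computer, you can run it by file name from a command prompt or the Run dialog box (a minimal example):

rem Open the Disk Management snap-in
diskmgmt.msc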

DiskPart.exe: DiskPart
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of DiskPart in the Windows Server 2003 family are identical. DiskPart includes a few 64-bit parameters that are only available on Itanium-based computers. DiskPart is a text-mode command interpreter that enables you to manage objects (disks, partitions, or volumes) by using scripts or direct input from a command prompt.
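For example, you could run DiskPart against a script file (a minimal sketch; the script file name, disk number, volume size, and drive letter are placeholders). The script file (C:\makevol.txt) might contain:

select disk 1
convert dynamic
create volume simple size=1000 disk=1
assign letter=E

You would then run it from a command prompt:

diskpart /s C:\makevol.txt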

Format.exe: Format
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of Format in the Windows Server 2003 family are identical. Format prepares a volume on the specified disk to accept Windows files.
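For example, the following command performs a quick format of a volume as NTFS with a volume label (a minimal sketch; the drive letter and label are placeholders):

rem Quick-format volume E as NTFS with the label Data
format E: /fs:ntfs /v:Data /q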

FTonline.exe: Fault-tolerant Disk Mounter
Category

Windows Server 2003 support tool.


Version compatibility

FTonline enables an administrator to mount and recover files from fault-tolerant disks created in previous versions of Windows. This tool is useful if you did not upgrade your disks to dynamic disks, or if you failed to back up your data before installing Windows Server 2003. You can install FTonline by using the Support Tools setup program located in the \Support\Tools folder on the Windows XP Professional and Windows Server 2003 operating system CDs.

Mountvol.exe: Mountvol
Category

Windows Server 2003 operating system tool.


Version compatibility

All versions of Mountvol in the Windows Server 2003 family are identical. Mountvol includes a few 64-bit parameters that are only available on Itanium-based computers. Mountvol creates, deletes, or lists volume mount points. Mountvol enables you to link volumes without using drive letters.
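For example (a minimal sketch; the folder path and volume GUID shown are placeholders):

rem List volume GUID names and current mount points
mountvol
rem Create an empty NTFS folder and mount a volume there instead of using a drive letter
md C:\Data
mountvol C:\Data \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\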

SecInspect.exe: Sector Inspector
Category

Windows Server 2003 Resource Kit tool.


Version compatibility

SecInspect is a command-line diagnostics tool that enables administrators to view the contents of master boot records, boot sectors, and IA64 GUID partition tables. Additional features include creating hexadecimal dumps of binary files and backup/restore of sector ranges. For more information about this tool, see the Help that comes with the tool. To find this tool, see Resource Kit Tool Updates in Tools and Settings Collection.

Dynamic Disk and Volume Registry Entries


The following registry entries are associated with dynamic disks and volumes. For more information about the registry, see the Registry Reference in Tools and Settings Collection.

The information here is provided as a reference for use in troubleshooting or verifying that the required settings are applied. It is recommended that you do not directly edit the registry unless there is no other alternative. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. This can result in unrecoverable errors in the system. When possible, use Group Policy or other Windows tools, such as Microsoft Management Console (MMC), to accomplish tasks rather than editing the registry directly. If you must edit the registry, use extreme caution.

The following sections describe the dynamic disk and volume registry entries that are located under \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\. The headings represent the next level in the path after \Services. For example, Vds\Debuglog is equal to \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Vds\Debuglog.

Vds\Debuglog
Level

Registry path

\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Vds\Debuglog\Level
Version

Windows Server 2003 family.

Sets the level of logging for the Virtual Disk Service (VDS).

Use Registry Editor to add a decimal value named Level. Stop and restart the service after changing the value.

VDS Log Options

Item Logged    Bitmask Number
Errors         1
Warnings       2
Trace          4
Information    8

Set the value by adding the bitmask numbers of the values you want to log. For example, decimal 3 logs errors (1) and warnings (2). Decimal 9 logs errors (1) and information (8). The range for this value is 0-15. Set the value to 0xF (decimal 15) to log all items.
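For example, the following commands set the value from a command prompt and then restart the service (a minimal sketch; decimal 15 enables logging of all items):

rem Create or update the Level value under Vds\Debuglog
reg add HKLM\SYSTEM\CurrentControlSet\Services\Vds\Debuglog /v Level /t REG_DWORD /d 15 /f
rem Stop and restart the Virtual Disk Service
net stop vds
net start vds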

Dmadmin\Parameters

EnableDynamicConversionFor1394

Registry path

\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\dmadmin\Parameters\EnableDynamicConversionFor1394
Version

Windows Server 2003 family.

Allows a user to convert an Institute of Electrical and Electronics Engineers, Inc. (IEEE) 1394 (FireWire) disk to a dynamic disk on Windows Server 2003 or earlier operating systems. Converting 1394 disks to dynamic disks is not a tested or supported scenario. This registry entry is provided for compatibility with a Windows 2000 beta that did support this functionality.

EnableDynamicConversionFor1394 is a DWORD value with a range of 0-1. The default is 0 (zero). To modify the value, use Registry Editor. Stop and restart the service after modifying.
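If you decide to set this entry anyway (keeping in mind that the scenario is not tested or supported), the value can also be set from a command prompt (a minimal sketch):

rem Enable the unsupported IEEE 1394 dynamic conversion setting
reg add HKLM\SYSTEM\CurrentControlSet\Services\dmadmin\Parameters /v EnableDynamicConversionFor1394 /t REG_DWORD /d 1 /f
rem Stop and restart the Logical Disk Manager Administrative Service
net stop dmadmin
net start dmadmin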

Dynamic Disk and Volume WMI Classes


The following table lists the WMI classes that are associated with dynamic disks and volumes.

WMI Classes Associated with Dynamic Disks and Volumes

Class Name                       Namespace      Version Compatibility
Win32_DiskDrive                  \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_DiskDrivePhysicalMedia     \root\cimv2    Windows Server 2003 family
Win32_DiskDriveToDiskPartition   \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_DiskPartition              \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_LogicalDisk                \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_LogicalDiskToPartition     \root\cimv2    Windows NT Server 4.0 SP4 and later
Win32_MappedLogicalDisk          \root\cimv2    Windows Server 2003 family
Win32_PhysicalMedia              \root\cimv2    Windows Server 2003 family
Win32_Volume                     \root\cimv2    Windows Server 2003 family

For more information about these WMI classes, see the WMI SDK documentation on MSDN.
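For example, you can query some of these classes from a command prompt by using the WMI command-line tool (Wmic). This is a minimal sketch, and the property names shown are only a subset of what each class exposes:

rem List physical disks with their interface type, partition count, and size
wmic path Win32_DiskDrive get Caption,InterfaceType,Partitions,Size
rem List volumes with drive letter, label, capacity, and free space
wmic path Win32_Volume get DriveLetter,Label,Capacity,FreeSpace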
