LVM HP-UX
Table of Contents
1. Basic Tasks
2. Recognizing and Initializing in LVM a Newly Added Disk / LUN (Discovery / Rescan)
3. Removing a Physical Volume
4. Creating a Volume Group
5. Adding a Disk to a Volume Group
6. Removing a Disk from a Volume Group
7. Removing a Volume Group
8. Creating a Logical Volume and Mounting its File System
9. Extending a Logical Volume and its File System
10. Reducing a Logical Volume and its File System
11. Adding a Disk to a Volume Group and Creating a Logical Volume
12. Adding a Disk, Creating a Volume Group and Creating a Logical Volume
13. Adding a Disk to a Volume Group and Extending a Logical Volume
14. Adding a LUN / External Disk, Extending the Volume Group and the Logical Volume
15. Importing and Exporting Volume Groups
16. Removing a Logical Volume
17. Moving Disks Within a System (LVM Configuration with Persistent Device Files)
18. Moving Disks Within a System (LVM Configuration with Legacy Device Files)
19. Moving Disks Between Systems
20. Moving Data to a Different Physical Volume
21. Replacing a Mirrored Non-Boot Disk
22. Replacing an Unmirrored Non-Boot Disk
23. Replacing a Mirrored Boot Disk
24. Creating a Spare Disk
25. Reinstating a Spare Disk
26. Changing Physical Volume Boot Types
1. Basic Tasks
2. Recognizing and Initializing in LVM a Newly Added Disk / LUN (Discovery / Rescan)
When a new disk is added to the server or a new LUN is assigned to the host, you have to execute the following procedure so that the operating system discovers the disk/LUN and LVM initializes it.
Check for the new hardware (the new devices will have a hardware path, but no device file associated with them):
ioscan -fnC disk | more
Create the device file for the hardware path:
insf
If using vpaths then create the vpath association:
/opt/IBMdpo/bin/cfgvpath
/opt/IBMdpo may not be the path of the sdd software, so run "whereis cfgvpath" t
o find the correct path.
Verify the new devices / vpaths:
ioscan -fnC disk
strings /etc/vpath.cfg
/opt/IBMdpo/bin/showvpath
Create the physical volume (Initialize the disk for use with LVM).
For each disk (not vpath) issue:
pvcreate -f /dev/rdsk/cxtxdx
For vpaths this information can be found in the file /etc/vpath.cfg.
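If several new disks were discovered, you can initialize them in one pass with a loop such as the following (a sketch only; the device names c5t0d0 and c5t1d0 are placeholders for the devices reported by ioscan):
for disk in c5t0d0 c5t1d0; do pvcreate -f /dev/rdsk/$disk; echo Initialized $disk; done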
3. Removing a Physical Volume
To remove a physical volume that is not used or referenced by any LVM structures (volume groups and logical volumes), follow this quick procedure.
To remove a physical volume that is used and referenced by LVM structures, you first have to remove the logical volumes and the volume groups that rely on it by following the procedures detailed in the sections below (Removing a Logical Volume and Removing a Volume Group), and then you can issue the commands in this section.
Identify the hardware path of the disk to remove:
ioscan -fnC disk
Remove the special device file:
rmsf -H <HW_path_from_ioscan_output>
7. Removing a Volume Group
Before removing a volume group, back up the data on its logical volumes.
Identify all of the logical volumes in this volume group:
vgdisplay -v /dev/vg01
OR:
vgdisplay -v vg01 | grep "LV Name" | awk '{print $3}'
Kill the processes using the volumes and unmount all of the logical volumes with
a command such as the following repeated for each of the volumes in the volume
group:
fuser -ku /dev/vg01/lvhome
umount /dev/vg01/lvhome
You can avoid manually issuing the above two commands for each of the volumes in the volume group by using a for loop such as the following (customize it for your needs):
for vol_name in `vgdisplay -v vg01 | grep "LV Name" | awk '{print $3}'`; do fuser -ku $vol_name; sleep 2; umount -f $vol_name; echo Unmounted $vol_name; done
Check that all of the logical volumes in the volume group are unmounted:
bdf
bdf | grep vg01
After you have freed and unmounted all of the logical volumes, remove the volume group and run some checks:
vgexport /dev/vg01
vgdisplay -v vg01
vgdisplay -v | grep vg01
vgdisplay -v
Note: using vgexport to remove a volume group is easier and faster than running lvremove on each logical volume, vgreduce on each of the physical volumes (except the last one) and finally vgremove.
Moreover, another advantage is that the /dev/vg01 directory is also removed.
Nevertheless, for completeness, this is the common alternative procedure you can follow instead of using vgexport.
After you have freed and unmounted all of the logical volumes by using fuser and umount, issue the following command for each logical volume in the volume group:
lvremove /dev/vg01/lvoln
Then, issue the following command for each disk in the volume group:
vgreduce /dev/vg01 /dev/disk/diskn
You can avoid manually issuing the above two commands for each of the volumes and disks in the volume group by using for loops such as the following (customize them for your needs):
for vol_name in `vgdisplay -v vg01 | grep "LV Name" | awk '{print $3}'`; do lvremove -f $vol_name; sleep 2; echo Removed $vol_name; done
for disk_name in `vgdisplay -v vg01 | grep "PV Name" | awk '{print $3}'`; do vgreduce /dev/vg01 $disk_name; sleep 2; echo Removed $disk_name; done
Finally, remove the volume group and run some checks:
vgremove vg01
vgdisplay -v vg01
vgdisplay -v | grep vg01
vgdisplay -v
If the system on which you're creating the volume group is a node of an HP ServiceGuard cluster, then you have to present the new structure to the cluster to make it aware of it. To do this, follow the steps about deploying the LVM configuration on HP ServiceGuard cluster nodes at the end of the section "Creating a Volume Group".
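The earlier steps (1-3, not included in this extract) typically create the volume group on the primary node and write a map file for it; a minimal sketch, assuming the volume group is /dev/vgdata:
vgexport -p -v -s -m /tmp/vgdata.map /dev/vgdata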
4. Copy the map file and the package control file to all nodes by using rcp or scp:
rcp /etc/cmcluster/package_name/package_name.run gldev:/etc/cmcluster/package_na
me/
5. Check and note the minor number in group file for the volume group:
ls -l /dev/vgdata/group
6. On the alternate node, create the group device file using the same minor number noted on the primary node at step 5 (shown here as the placeholder 0xNN0000):
mknod /dev/vgdata/group c 64 0xNN0000
7. Import the volume group on the alternate node:
vgimport -s -m /tmp/vgdata.map /dev/vgdata
8. Check whether the volume group can be activated on the alternate node, and make a backup of the volume group configuration:
vgchange -a y /dev/vgdata
vgcfgbackup /dev/vgdata
9. Still on the alternate node, create the mount point directory and assign the same ownership and permissions as on the primary node:
mkdir /datavol
chmod 755 /datavol
chown oracle:oragroup /datavol
10. Mount the file system that resides on the logical volume on the alternate node:
mount /dev/vgdata/lvdata /datavol
bdf
11. Unmount the filesystem, deactivate the volume group on the alternate node an
d check:
umount /datavol
vgchange -a n /dev/vgdata
vgdisplay -v vgdata
If the environment uses mirrored individual disks in physical volume groups (PVGs), check the /etc/lvmpvg file to ensure that each physical volume group contains the correct physical volume names for the alternate node.
When you use PVG-strict mirroring, the physical volume group configuration is recorded in the /etc/lvmpvg file on the configuration node: this file defines the physical volume groups on which mirroring is based and indicates which physical volumes belong to each physical volume group.
On each cluster node, the /etc/lvmpvg file must contain the correct physical volume names for the physical volume groups' disks as they are known on that node.
Physical volume names for the same disks can be different on different nodes.
After distributing volume groups to other nodes, make sure each node's /etc/lvmpvg file correctly reflects the contents of all physical volume groups on that node.
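As an illustration only (all device file names below are assumptions), the same physical volume group can be described with different device files on each node. On node A, /etc/lvmpvg might contain:
VG /dev/vgdata
PVG PVG0
/dev/dsk/c4t0d0
PVG PVG1
/dev/dsk/c6t0d0
while on node B the same disks might appear as:
VG /dev/vgdata
PVG PVG0
/dev/dsk/c8t0d0
PVG PVG1
/dev/dsk/c10t0d0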
12. Adding a Disk, Creating a Volume Group and Creating a Logical Volume
14. Adding a LUN / External Disk, Extending the Volume Group and the Logical Volume
The logical volume extension consists of extending the volume itself and then the file system that relies on it.
By using SAM:
1) Open SAM
2) Select Disks and File Systems --> Volume Groups
3) Arrow down to the volume group you want to extend (identified from bdf) and hit the space bar to select it
4) Tab once to get to the menu at the top, then arrow over to "Actions" and hit enter
5) Select "Extend" from the Actions menu
6) Select "Select Disk(s)..." and hit enter
7) Select the appropriate disk to add with the space bar and select OK, then select OK again, which will expand the volume group
8) Exit SAM
Check the File System Mounted on the Logical Volume to Extend and Take Note of t
he Space Info (kbytes used avail %used):
bdf /oradata
Extend the Logical Volume:
lvextend -L 11776 /dev/vg01/lvol3
Defragment the File System Mounted on the Logical Volume:
fsadm -d -D -e -E /oradata
Extend the File System Mounted on the Logical Volume:
fsadm -F vxfs -b 11776M /oradata
Check the File System Info to Verify the Current Space:
bdf /oradata
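Note that growing a mounted VxFS file system with fsadm -b requires the HP OnLineJFS product. If it is not installed, a hedged alternative (assuming the file system sits on /dev/vg01/lvol3 and has an /etc/fstab entry for /oradata) is to unmount and use extendfs:
umount /oradata
extendfs -F vxfs /dev/vg01/rlvol3
mount /oradata
bdf /oradata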
17. Moving Disks Within a System (LVM Configuration with Persistent Device Files)
18. Moving Disks Within a System (LVM Configuration with Legacy Device Files)
Create the volume group directory and group device file (the group file in this example has a major number of 64 and a minor number of 0x010000):
mkdir /dev/vgnn
mknod /dev/vgnn/group c 64 0x010000
Add the volume group entry back to the LVM configuration files:
vgimport -v -s -m /tmp/vgnn.map /dev/vgnn
Activate the newly imported volume group:
vgchange -a y /dev/vgnn
Back up the volume group configuration:
vgcfgbackup /dev/vgnn
Make the volume group and its associated logical volumes unavailable to users:
vgchange -a n /dev/vg_planning
Preview the removal of the volume group information from the LVM configuration f
iles:
vgexport -p -v -s -m /tmp/vg_planning.map /dev/vg_planning
Remove the volume group information:
vgexport -v -s -m /tmp/vg_planning.map /dev/vg_planning
Connect the disks to the new system and copy the /tmp/vg_planning.map file to th
e new system.
Create the volume group Device Files:
mkdir /dev/vg_planning
mknod /dev/vg_planning/group c 64 0x010000
chown -R root:sys /dev/vg_planning
chmod 755 /dev/vg_planning
chmod 640 /dev/vg_planning/group
Get device file information about the disks:
ioscan -funN -C disk
Import the volume group:
vgimport -N -v -s -m /tmp/vg_planning.map /dev/vg_planning
Activate the newly imported volume group:
vgchange -a y /dev/vg_planning
If the disk is not hot-swappable, power off the system to replace it.
If the disk is hot-swappable, detach it:
pvchange -a N /dev/disk/disk14
Physically Replace the disk.
If the system was not rebooted, notify the mass storage subsystem that the disk has been replaced:
scsimgr replace_wwid -D /dev/rdisk/disk14
Determine the new LUN instance number for the replacement disk:
ioscan -m lun
In this example, LUN instance 28 was created for the new disk, with LUN hardware
path 64000/0xfa00/0x1c, device special files /dev/disk/disk28 and /dev/rdisk/di
sk28, at the same lunpath hardware path as the old disk, 0/1/1/1.0x3.0x0. The ol
d LUN instance 14 for the old disk now has no lunpath associated with it.
If the system was rebooted to replace the failed disk, then ioscan -m lun does n
ot display the old disk.
Assign the old instance number to the replacement disk (this assigns the old LUN
instance number 14 to the replacement disk and the device special files for the
new disk are renamed to be consistent with the old LUN instance number):
io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28
ioscan -m lun /dev/disk/disk14
Restore LVM configuration information to the new disk:
vgcfgrestore -n /dev/vgnn /dev/rdisk/disk14
Restore LVM access to the disk (if the disk is hot-swappable):
pvchange -a y /dev/disk/disk14
If the disk is not hot-swappable and you had to reboot the system, reattach the disk by reactivating the volume group as follows:
vgchange -a y /dev/vgnn
Initialize boot information on the disk:
lvlnboot -v
lvlnboot -R /dev/vg00
lvlnboot -v
After a failed disk has been repaired or a decision has been made to replace it,
follow these steps to reinstate it and return the spare disk to its former stan
dby status.
Physically connect the new or repaired disk.
Restore the LVM configuration:
vgcfgrestore -n /dev/vg01 /dev/rdisk/disk1
Ensure the volume group has been activated:
vgchange -a y /dev/vg01
Be sure that allocation of extents is now allowed on the replaced disk:
pvchange -x y /dev/disk/disk1
Move the data from the spare to the replaced physical volume:
pvmove /dev/disk/disk3 /dev/disk/disk1
The data from the spare disk is now back on the original disk or its replacement
, and the spare disk is returned to its role as a standby empty disk.
List the contents of the volume group configuration backup to verify it:
vgcfgrestore -l -v -n vg01
With non-LVM disks, a single root disk contains all the attributes needed for boot, system files, primary swap, and dump. With LVM, the single root disk is replaced by a pool of disks, a root volume group, which contains all of the same elements but allows a root logical volume, a boot logical volume, a swap logical volume, and one or more dump logical volumes. Each of these logical volumes must be contiguous, that is, contained on a single disk, and they must have bad block relocation disabled.
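To verify these attributes on an existing boot logical volume, a quick check (lvol1 here is just an example):
lvdisplay /dev/vg00/lvol1 | grep -E "Allocation|Bad block"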
If you newly install your HP-UX system and choose the LVM configuration, a root volume group is automatically configured (/dev/vg00), as are separate root (/dev/vg00/lvol3) and boot (/dev/vg00/lvol1) logical volumes. If you currently have a combined root and boot logical volume and want to separate them, then after creating the new boot logical volume use the lvlnboot command with the -b option to define it to the system; the change takes effect the next time the system is booted.
If you create your root volume group with multiple disks, use the lvextend comma
nd to place the boot, root, and primary swap logical volumes on the boot disk.
You can use pvmove to move the data from an existing logical volume to another d
isk if necessary to make room for the root logical volume.
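A minimal sketch of those two operations (the logical volume and disk names are assumptions, not taken from this document):
Define lvol1 as the boot logical volume, effective at the next boot:
lvlnboot -b /dev/vg00/lvol1
Move the extents of lvol4 from disk6 to disk7 to free room on the boot disk:
pvmove -n /dev/vg00/lvol4 /dev/disk/disk6 /dev/disk/disk7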
Create a bootable physical volume:
On an HP Integrity server, partition the disk using the idisk command and a part
ition description file, then run insf.
Run pvcreate with the -B option.
On an HP Integrity server, use the device file denoting the HP-UX partition:
pvcreate -B /dev/rdisk/disk6_p2
On an HP 9000 server, use the device file for the entire disk:
pvcreate -B /dev/rdisk/disk6
Create a directory for the volume group:
mkdir /dev/vgroot
mknod /dev/vgroot/group c 64 0xnn0000
Create the root volume group, specifying each physical volume to be included:
vgcreate /dev/vgroot /dev/disk/disk6
Place boot utilities in the boot area:
mkboot /dev/rdisk/disk6
Add an autoboot file to the disk boot area:
mkboot -a "hpux" /dev/rdisk/disk6
Create the boot logical volume with contiguous allocation and bad block relocation disabled, then extend it onto the boot disk (replace <size_in_MB> with the desired size):
lvcreate -C y -r n -n bootlv /dev/vgroot
lvextend -L <size_in_MB> /dev/vgroot/bootlv /dev/disk/disk6
After you create mirror copies of the root, boot, and primary swap logical volum
es, if any of the underlying physical volumes fail, the system can use the mirro
r copy on the other disk and continue. When the failed disk comes back online, i
t is automatically recovered, provided the system has not been rebooted.
If the system reboots before the disk is back online, reactivate the volume grou
p to update the LVM data structures that track the disks within the volume group
. You can use vgchange -a y even though the volume group is already active.
To reactivate volume group vg00:
vgchange -a y /dev/vg00
As a result, LVM scans and activates all available disks in the volume group vg
00, including the disk that came online after the system rebooted.
The procedure for creating a mirror of the boot disk is different for HP 9000 an
d HP Integrity servers. HP Integrity servers use partitioned boot disks.
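The commands that actually create the mirror copies are not shown in this extract; a minimal sketch for an HP Integrity server (assuming disk2 has already been partitioned with idisk and initialized with pvcreate -B, and that lvol1, lvol2 and lvol3 are the boot, swap and root logical volumes):
vgextend /dev/vg00 /dev/disk/disk2_p2
mkboot -e -l /dev/rdisk/disk2
lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol2 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol3 /dev/disk/disk2_p2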
If lvextend fails with the message "-m: illegal option", HP MirrorDisk/UX is not installed.
Update the root volume group information:
lvlnboot -R /dev/vg00
lvlnboot -v
Specify the mirror disk as the alternate boot path in nonvolatile memory:
setboot -a 0/1/1/0.0x1.0x0
Add a line to /stand/bootconf for the new boot disk (the leading "l" indicates an LVM disk):
vi /stand/bootconf
l /dev/disk/disk2_p2
You can split a mirrored logical volume into two logical volumes to perform a ba
ckup on an offline copy while the other copy stays online. When you complete the
backup of the offline copy, you can merge the two logical volumes back into one
. To bring the two copies back in synchronization, LVM updates the physical exte
nts in the offline copy based on changes made to the copy that remained in use.
You can use HP SMH to split and merge logical volumes, or use the lvsplit and lv
merge commands.
To back up a mirrored logical volume containing a file system, using lvsplit and
lvmerge, follow these steps:
Split the logical volume /dev/vg00/lvol1 into two separate logical volumes:
lvsplit /dev/vg00/lvol1
Perform a file system consistency check on the logical volume to be backed up:
fsck /dev/vg00/lvol1b
Mount the file system:
mkdir /backup_dir
mount /dev/vg00/lvol1b /backup_dir
Perform the backup.
Unmount the file system:
umount /backup_dir
Merge the split logical volume back with the original logical volume:
lvmerge /dev/vg00/lvol1b /dev/vg00/lvol1
If you back up your volume group configuration, you can restore a corrupted or l
ost LVM configuration in the event of a disk failure or corruption of your LVM c
onfiguration information.
It is important to save the volume group configuration information whenever you make any change to the configuration, such as adding or removing disks from a volume group, changing the disks in a root volume group, creating or removing logical volumes, or extending or reducing logical volumes.
By default, vgcfgbackup saves the configuration of a volume group to the file /e
tc/lvmconf/volume_group_name.conf.
Backup Configuration:
vgcfgbackup -f pathname/filename volume_group_name
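To back up every volume group currently known to the system in one pass, a loop such as the following can be used (a sketch; it relies on the "VG Name" lines of the vgdisplay output):
for vg in `vgdisplay | grep "VG Name" | awk '{print $3}'`; do vgcfgbackup $vg; done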
To run vgcfgrestore, the physical volume must be detached.
Restore Configuration (using the default backup file /etc/lvmconf/vgsales.conf):
pvchange -a n /dev/disk/disk5
vgcfgrestore -n /dev/vgsales /dev/rdisk/disk5
pvchange -a y /dev/disk/disk5
If the physical volume is not mirrored or the mirror copies are not current and
available, you must deactivate the volume group with vgchange, perform the vgcfg
restore, and activate the volume group:
vgchange -a n /dev/vgsales
vgcfgrestore -n /dev/vgsales /dev/rdisk/disk5
vgchange -a y /dev/vgsales
If you plan to use a disk management utility to create a backup image or snapshot
of all the disks in a volume group, you must make sure that LVM is not writing t
o any of the disks when the snapshot is being taken; otherwise, some disks can c
ontain partially written or inconsistent LVM metadata.
To keep the volume group disk image in a consistent state, you must either deact
ivate the volume group or quiesce it.
Deactivating the volume group requires you to close all the logical volumes in t
he volume group, which can be disruptive.
Quiescing the volume group enables you to keep the volume group activated and th
e logical volumes open during the snapshot operation, minimizing the impact to y
our system.
You can quiesce both read and write operations to the volume group, or just writ
e operations.
While a volume group is quiesced, the vgdisplay command reports the volume group
access mode as quiesced.
The indicated I/O operations queue until the volume group is resumed, and comman
ds that modify the volume group configuration fail immediately.
By default, the volume group remains quiesced until it is explicitly resumed.
You can specify a maximum quiesce time in seconds using the -t option of the vgc
hange command: if the quiesce time expires, the volume group is resumed automati
cally.
The vgchange -Q option indicates the quiescing mode, which can be rw (quiesce both reads and writes) or w (quiesce writes only).
To quiesce a volume group for a maximum of ten minutes (600 seconds):
vgchange -Q w -t 600 vg08
To resume a quiesced volume group:
vgchange -R vg08
Because of the contiguous allocation policy, you cannot extend the primary swap logical volume; to increase primary swap, create a bigger logical volume and modify the Boot Data Reserved Area (BDRA) to make it the primary swap.
Create the logical volume on the volume group vg00:
lvcreate -C y -L 240 /dev/vg00
As the name of this new logical volume will be displayed on the screen, note it:
it will be needed later.
To ease the example, we'll assume now the name of the new volume is /dev/vg00/lv
ol8.
Display the current root and swap logical volumes (lvol2 is the default primary
swap):
lvlnboot -v /dev/vg00
Specify lvol8 is the primary swap logical volume:
lvlnboot -s /dev/vg00/lvol8 /dev/vg00
Recover any missing links to all of the logical volumes in the BDRA and update t
he BDRA of each bootable physical volume in the volume group.
Update the root volume group information:
lvlnboot -R /dev/vg00
Reboot the system:
init 6
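After the reboot, you can verify that the new logical volume is being used as primary swap, for example:
swapinfo -tam
lvlnboot -v /dev/vg00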
Imagine an SC10 rack with 2 controllers. Some of the disks are on controller 1, some on controller 2. For high availability you would want a logical volume to be created on a disk on one controller and mirrored on a disk on the other controller. However, the concept of "controller" is unknown to LVM.
Hence PVGs.
You create one for the disks on one controller and another one for the disks on the other controller, then you make the logical volumes PVG-strict (lvchange -s g ...).
PVGs increase not only I/O high availability but also performance.
By using PVGs, physical volumes can be grouped by controller and logical volumes can then be created on different PVGs. This way you know where each disk is and down which channels each mirror copy goes, so with careful planning and diligent use of the LVM commands you can ensure I/O channel separation within LVM.
For example, to use two PVGs in vg01 with c1t6d0 and c2t6d0 in one PVG (PVG0), c
3t6d0 and c4t6d0 in the other PVG (PVG1) the contents of the file /etc/lvmpvg sh
ould be:
VG /dev/vg01
PVG PVG0
/dev/dsk/c1t6d0
/dev/dsk/c2t6d0
PVG PVG1
/dev/dsk/c3t6d0
/dev/dsk/c4t6d0
Alternatively, physical volume groups can be created directly on the command line when creating or extending a volume group (this example uses its own disk and PVG names):
vgcreate -g pvg1 /dev/vgname /dev/dsk/c5t8d0 /dev/dsk/c5t9d0
vgextend -g pvg2 /dev/vgname /dev/dsk/c7t1d0 /dev/dsk/c7t2d0
vgdisplay -v vgname
If the system on which you're creating the volume group is a node of an HP ServiceGuard cluster, then you have to present the new structure to the cluster to make it aware of it. To do this, follow the steps about deploying the LVM configuration on HP ServiceGuard cluster nodes at the end of the section "Creating a Volume Group".
After creating the /etc/lvmpvg file as described in the previous section, each mirror copy you create can be forced onto a different PVG.
To create a PVG-strict mirrored logical volume in the volume group:
lvcreate -m 1 -n volume_name -L size_in_MB -s g /dev/vg_name
lvdisplay -v /dev/vg_name/volume_name
To create a RAID 0+1 configuration you need at least two disks in each PVG; you then use the -s g and -D y options of the lvcreate command during the logical volume creation.
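A minimal sketch of creating such a volume at creation time (the name lvstripe and the 2048 MB size are assumptions):
lvcreate -m 1 -n lvstripe -L 2048 -s g -D y /dev/vg01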
If the logical volume is already created but not mirrored yet, issue the following commands:
lvchange -s g /dev/vg01/lvhome
lvextend -m 1 /dev/vg01/lvhome
lvdisplay -v /dev/vg01/lvhome
If the system on which you're creating the volume group is a node of an HP ServiceGuard cluster, then you have to present the new structure to the cluster to make it aware of it. To do this, follow the steps about deploying the LVM configuration on HP ServiceGuard cluster nodes at the end of the section "Creating a Logical Volume and Mounting its File System".