Useful Solaris Commands
truss -c (Solaris >= 8): This astounding option to truss provides a profile summary of the
command being trussed:
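The example output appears to have been lost here; a representative truss -c run looks something like the following (the figures are illustrative, not from a real session):

```shell
# truss -c ls >/dev/null
syscall               seconds   calls  errors
_exit                     .00       1
read                      .00       4
open                      .00      11       3
close                     .00       9
brk                       .00      14
stat                      .00       5
ioctl                     .00       2
                     --------  ------   ----
sys totals:               .01      46      3
usr time:                 .00
elapsed:                  .05
```

The summary lists each system call the command made, the time spent in it, and the call and error counts.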
It can also show profile data on a running process. In this case, the data shows what the
process did between when truss was started and when truss execution was terminated
with a control-c. It’s ideal for determining why a process is hung without having to wade
through the pages of truss output.
truss -d and truss -D (Solaris >= 8): These truss options show the time associated with
each system call shown by truss and are excellent for finding performance problems
in custom or commercial code. For example:
$ truss -d who
Base time stamp: 1035385727.3460 [ Wed Oct 23 11:08:47 EDT 2002 ]
0.0000 execve("/usr/bin/who", 0xFFBEFD5C, 0xFFBEFD64) argc = 1
0.0032 stat("/usr/bin/who", 0xFFBEFA98) = 0
0.0037 open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
0.0042 open("/usr/local/lib/libc.so.1", O_RDONLY) Err#2 ENOENT
0.0047 open(“/usr/lib/libc.so.1”, O_RDONLY) = 3
0.0051 fstat(3, 0xFFBEF42C) = 0
. . .
truss -D is even more useful, showing the time delta between system calls:
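A sketch of what the -D output looks like (the delta times are illustrative, modeled on the -d run above):

```shell
$ truss -D who
 0.0000 execve("/usr/bin/who", 0xFFBEFD5C, 0xFFBEFD64)  argc = 1
 0.0028 stat("/usr/bin/who", 0xFFBEFA98)                = 0
 0.0005 open("/var/ld/ld.config", O_RDONLY)             Err#2 ENOENT
 0.0005 open("/usr/lib/libc.so.1", O_RDONLY)            = 3
 0.0004 fstat(3, 0xFFBEF42C)                            = 0
```

Here the leading column is the time since the previous system call, so long-running calls stand out immediately.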
truss -T: This is a great debugging help. It will stop a process at the execution of a
specified system call. (“-U” does the same, but with user-level function calls.) A core
could then be taken for further analysis, or any of the /proc tools could be used to
determine many aspects of the status of the process.
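For example, to stop a running process at its next open() call and then inspect it (a sketch; the process-id is hypothetical, and the pstack/gcore steps are one possible follow-up):

```shell
# truss -T open -p 1234     # stop process 1234 when it next calls open()
# pstack 1234               # examine the stopped process with /proc tools
# gcore 1234                # or take a core image for later analysis
# prun 1234                 # let the process continue when done
```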
truss -l (improved in Solaris 9): Shows the thread number of each call in a multi-threaded
process. Solaris 9 truss -l finally makes it possible to watch the execution of a multi-
threaded application.
Truss is truly a powerful tool. It can be used on core files to analyze what caused the
problem, for example. It can also show details on user-level library calls (either system
libraries or programmer libraries) via the “-u” option.
plimit (Solaris >= 8): This command displays and sets the per-process limits on a running
process. This is handy if a long-running process is running up against a limit (for
example, number of open files). Rather than using limit and restarting the command,
plimit can modify the running process.
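For example, raising the open-file limit of a running process might look like this (the process-id and limit values are hypothetical):

```shell
# plimit 1234                 # display the current limits of process 1234
# plimit -n 4096,4096 1234    # raise its soft,hard nofiles limits to 4096
```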
coreadm (Solaris >= 8): In the “old” days (before coreadm), core dumps were placed in
the process’s working directory. Core files would also overwrite each other. All this and
more has been addressed by coreadm, a tool to manage core file creation. With it, you
can specify whether to save cores, where cores should be stored, how many versions
should be retained, and more. Settings can be retained between reboots by coreadm
modifying /etc/coreadm.conf.
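A sketch of a typical setup (the /var/cores path and naming pattern are an example, not a requirement):

```shell
# coreadm -g /var/cores/core.%f.%p -e global   # save global cores as core.<file>.<pid>
# coreadm -e log                               # log core-dump events via syslog
# coreadm                                      # display the current configuration
```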
pgrep (Solaris >= 8): pgrep searches through /proc for processes matching the given
criteria, and returns their process-ids. A great option is “-n”, which returns the newest
process that matches.
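For example (the process names are hypothetical):

```shell
$ pgrep -n -u oracle sqlplus    # pid of the newest sqlplus run by user oracle
$ pgrep -l nfs                  # list matching pids along with their names
```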
preap (Solaris >= 9): Reaps zombie processes. Any processes stuck in the “z” state (as
shown by ps), can be removed from the system with this command.
pargs (Solaris >= 9): Shows the arguments and environment variables of a process.
nohup -p (Solaris >= 9): The nohup command can be used to start a process, so that if
the shell that started the process closes (i.e., the process gets a “SIGHUP” signal), the
process will keep running. This is useful for backgrounding a task that should continue
running no matter what happens around it. But what happens if you start a process and
later want to HUP-proof it? With Solaris 9, nohup -p takes a process-id and causes
SIGHUP to be ignored.
prstat (Solaris >= 8): prstat is top and a lot more. Both commands provide a screen’s
worth of process and other information and update it frequently, for a nice window on
system performance. prstat has much better accuracy than top. It also has some nice
options. “-a” shows process and user information concurrently (sorted by CPU hog, by
default). “-c” causes it to act like vmstat (new reports printed below old ones). “-C”
shows processes in a processor set. “-j” shows processes in a “project”. “-L” shows per-
thread information as well as per-process. “-m” and “-v” show quite a bit of per-process
performance detail (including pages, traps, lock wait, and CPU wait). The output data can
also be sorted by resident-set (real memory) size, virtual memory size, execute time, and
so on. prstat is very useful on systems without top, and should probably be used instead
of top because of its accuracy (and some sites care that it is a supported program).
trapstat (Solaris >= 9): trapstat joins lockstat and kstat as the most inscrutable
commands on Solaris. Each shows gory details about the innards of the running operating
system. Each is indispensable in solving strange happenings on a Solaris system. Best of
all, their output is good to send along with bug reports, but further study can reveal useful
information for general use as well.
vmstat -p (Solaris >= 8): Until this option became available, it was almost impossible
(see the “se toolkit”) to determine what kind of memory demand was causing a system to
page. vmstat -p is key because it not only shows whether your system is under memory
stress (via the “sr” column), it also shows whether that stress is from application code,
application data, or I/O. “-p” can really help pinpoint the cause of any mysterious
memory issues on Solaris.
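Illustrative output (the figures are invented to show the column groups):

```shell
$ vmstat -p 5
     memory           page          executable      anonymous      filesystem
   swap  free  re  mf  fr  de  sr  epi  epo  epf  api  apo  apf  fpi  fpo  fpf
 848920 24120  12  50   0   0   0    0    0    0    0    0    0    4    0    0
```

The epi/epo/epf columns track executable pages, api/apo/apf anonymous (application data) pages, and fpi/fpo/fpf file system pages, which is how the source of any paging pressure can be identified.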
pmap -x (Solaris >= 8, bugs fixed in Solaris >= 9): If the process with memory problems
is known, and more details on its memory use are needed, check out pmap -x. The target
process-id has its memory map fully explained, as in:
# pmap -x 1779
1779: -ksh
Address Kbytes RSS Anon Locked Mode Mapped File
00010000 192 192 - - r-x-- ksh
00040000 8 8 8 - rwx-- ksh
00042000 32 32 8 - rwx-- [ heap ]
FF180000 680 664 - - r-x-- libc.so.1
FF23A000 24 24 - - rwx-- libc.so.1
FF240000 8 8 - - rwx-- libc.so.1
FF280000 568 472 - - r-x-- libnsl.so.1
FF31E000 32 32 - - rwx-- libnsl.so.1
FF326000 32 24 - - rwx-- libnsl.so.1
FF340000 16 16 - - r-x-- libc_psr.so.1
FF350000 16 16 - - r-x-- libmp.so.2
FF364000 8 8 - - rwx-- libmp.so.2
FF380000 40 40 - - r-x-- libsocket.so.1
FF39A000 8 8 - - rwx-- libsocket.so.1
FF3A0000 8 8 - - r-x-- libdl.so.1
FF3B0000 8 8 8 - rwx-- [ anon ]
FF3C0000 152 152 - - r-x-- ld.so.1
FF3F6000 8 8 8 - rwx-- ld.so.1
FFBFE000 8 8 8 - rw--- [ stack ]
-------- ------- ------- ------- -------
total Kb 1848 1728 40 -
Here we see each chunk of memory, what it is being used for, how much space it is taking
(virtual and real), and mode information.
df -h (Solaris >= 9): This command is popular on Linux, and just made its way into
Solaris. df -h displays summary information about file systems in human-readable form:
$ df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 4.8G 1.7G 3.0G 37% /
/proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
fd 0K 0K 0K 0% /dev/fd
swap 848M 40K 848M 1% /var/run
swap 849M 1.0M 848M 1% /tmp
/dev/dsk/c0t0d0s7 13G 78K 13G 1% /export/home
Contents
1. Overview
2. Examining the Disks In Our Example
3. Partitioning the Disks
4. State Database - (State Database Replicas)
o Creating the (Initial) First Four State Database Replicas
o Creating the Next Seven State Database Replicas
o Creating Two State Database Replicas On the Same Slice
o Query All State Database Replicas
o Deleting a State Database Replica
5. Creating a Stripe - (RAID 0)
6. Creating a Concatenation - (RAID 0)
7. Creating Mirrors - (RAID 1)
o Create a Mirror From Unused Slices
o Create a Mirror From a File System That Can Be Unmounted
o Create a Mirror From a File System That Cannot Be Unmounted
o Create a Mirror From swap
o Create a Mirror From root (/)
8. Creating a RAID 5 Volume - (RAID 5)
9. Creating Hot Spare
Overview
For all examples in this document, I will be utilizing a Sun Blade 150
connected to a Sun StorEdge D1000 Disk Array containing twelve
9.1GB / 10000 RPM / UltraSCSI disk drives for a total disk array capacity
of 108GB. The disk array is connected to the Sun Blade 150 using a Dual
Differential Ultra/Wide SCSI (X6541A) host adapter. In the Sun
StorEdge D1000 Disk Array, the system identifies the drives as follows:
Controller 1 Controller 2
c1t0d0 - (d0) c2t0d0 - (d0)
c1t1d0 - (d0) c2t1d0 - (d1)
c1t2d0 - (d1) c2t2d0 - (d1)
c1t3d0 - (d20) c2t3d0 - (d20)
c1t4d0 - (d3) c2t4d0 - (d3)
c1t5d0 - (d3) c2t5d0 - (d4)
d0 : RAID 0 - Stripe
d1 : RAID 0 - Concatenation
d20 : RAID 1 - Mirror
d3 : RAID 5
d4 : Hot Spare
From the configuration above, you can see we have plenty of disk drives
to utilize for our examples! For the examples in this article, I will only be
using several of the disks within the D1000 array - in many cases, just
enough to demonstrate the use of the Volume Manager commands and
component configuration.
Partitioning the Disks
Volumes in Volume Manager are built from slices (disk partitions). If the
disks you plan on using as volumes have not been partitioned, do so now.
For the twelve 9.1GB disk drives within the D1000 Disk Array, I use the
same partition sizes and layout. By convention, I will use slice 7, which
covers nearly the entire disk, for storing the actual data. I will also use
slice 7 to store the state database replicas for each of the twelve disks.
Also by convention, I will use slice 2 as the backup partition.
The following is the partition table from one of the twelve hard drives:
format> verify
Use the format(1M) command to edit the partition table, label the disks,
and set the volume name.
State database replicas are created on disk slices using the metadb
command. Keep in mind that state database replicas can only be created
on slices that are not in use (i.e., that have no file system and are not
being used to store raw data). You cannot create state database replicas
on slices that contain a file system, root (/), /usr, or swap. State database
replicas can be created on slices that will be part of a volume, but they
need to be created BEFORE adding the slice to a volume.
In the following example, I will create one state database replica on each
of the first eleven disk drives in the D1000 Disk Array using the metadb
command. On the twelfth disk, I will give an example of how to create
two state database replicas on the same slice. In total, I will be creating 13
state database replicas on the twelve disks. The replicas will be created on
slice 7 of each disk. (This is the slice that we created to be used for this
purpose on each disk in the disk array.) I will create the 13 state database
replicas on the twelve disks using the following methods:
1. The first four initial state database replicas on the first four disks in
the disk array using the -a and -f command line options to the
metadb command.
2. Then create seven more replicas just using the -a option to the
metadb command.
3. Then use the -c option to the metadb command on the twelfth disk
to give an example of how to create two replicas on a single slice.
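Under the layout described above, the three steps might be sketched as follows (the exact slice list is an assumption based on the controller table in the Overview):

```shell
# metadb -a -f c1t0d0s7 c2t0d0s7 c1t1d0s7 c2t1d0s7
# metadb -a c1t2d0s7 c2t2d0s7 c1t3d0s7 c2t3d0s7 c1t4d0s7 c2t4d0s7 c1t5d0s7
# metadb -a -c 2 c2t5d0s7
# metadb -i
```

The -f option forces creation of the initial replicas, -c 2 places two replicas on a single slice, and metadb -i queries all replicas with a legend explaining the status flags.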
# metadb -d c2t4d0s7
• The -d option deletes all replicas that are located on the specified slice.
The /etc/system and /etc/lvm/mddb.cf files are automatically updated
to reflect the change.
# metadb -a c2t4d0s7
A RAID 0 striped volume (often called just a stripe) is one of the three types of
simple volumes:
NOTE: Sometimes a striped volume is called a stripe. Other times, stripe refers to the
component blocks of a striped concatenation. To "stripe" means to spread I/O requests
across disks by chunking parts of the disks and mapping those chunks to a virtual device (a
volume). Both striping and concatenation are classified as RAID Level 0.
The data in a striped volume is arranged across two or more slices. The
striping alternates equally-sized segments of data across two or more
slices to form one logical storage unit. These segments are interleaved
round-robin, so that the combined space is made alternately from each
slice. Sort of like a shuffled deck of cards.
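The striped-volume example this section discusses appears to have been dropped from the text; based on the details explained below (volume d0, three slices, 32KB interlace), the command was likely along these lines:

```shell
# metainit d0 1 3 c1t0d0s7 c2t0d0s7 c1t1d0s7 -i 32k
d0: Concat/Stripe is setup
```

The arguments read: one stripe, three slices wide, with a 32KB interlace.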
Let's explain the details of the above example. First notice that the
new striped volume, d0, consists of a single stripe (Stripe 0) made
of three slices (c1t0d0s7, c2t0d0s7, c1t1d0s7). The -i option sets
the interlace to 32KB. (The interlace cannot be less than 8KB, nor
greater than 100MB.) If interlace were not specified on the
command line, the striped volume would use the default of 16KB.
When using the metastat command to verify our volume, we can
see that this is a striped volume from the fact that all slices belong
to Stripe 0. We can also see that the interlace is 32KB (512 bytes * 64
blocks), as we defined it. The total size of the stripe is
27,135,779,328 bytes (512 * 52999569 blocks).
17. Now that we have created our simple volume (a RAID 0 stripe),
we can pretend that the volume is a big partition (slice) on
which we can do the usual file system things. Let's create a
UFS file system using the newfs command. Here, -i 8192 requests
one inode per 8KB of data space (the newfs -i option sets bytes
per inode; the UFS block size itself defaults to 8KB):
# newfs -i 8192 /dev/md/rdsk/d0
newfs: /dev/md/rdsk/d0 last mounted as /db0
newfs: construct a new file system /dev/md/rdsk/d0: (y/n)? y
Warning: 1 sector(s) in last cylinder unallocated
/dev/md/rdsk/d0: 52999568 sectors in 14759 cylinders of 27 tracks, 133 sectors
        25878.7MB in 923 cyl groups (16 c/g, 28.05MB/g, 3392 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 57632, 115232, 172832, 230432, 288032, 345632, 403232, 460832, 518432,
Initializing cylinder groups:
...................
super-block backups for last 10 cylinder groups at:
 52459808, 52517408, 52575008, 52632608, 52690208, 52747808, 52805408,
 52863008, 52920608, 52978208,
32. To ensure that this new file system is mounted each time the
machine is started, insert the following line into your /etc/vfstab
file (all on one line with tabs separating the fields):
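The vfstab entry itself is missing from the text; following the pattern used for d1 later in the article (the /db0 mount point is inferred from the newfs output above), it would be approximately:

```shell
/dev/md/dsk/d0   /dev/md/rdsk/d0   /db0   ufs   2   yes   -
```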
These components are made from slices. Simple volumes can be used
directly or as the basic building block for mirrors.
NOTE: You can also create a concatenated volume from a single slice. You could, for
example, create a single-slice concatenated volume. Later, when you need more storage,
you can add more slices to the concatenated volume.
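The creation command for the concatenation discussed below appears to be missing; given the metastat output that follows (volume d1, three stripes of one slice each), it was likely:

```shell
# metainit d1 3 1 c2t1d0s7 1 c1t2d0s7 1 c2t2d0s7
d1: Concat/Stripe is setup
```

The arguments read: three stripes, each one slice wide, which is how a concatenation is expressed to metainit.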
3. Use the metastat command to query your new (or, in our example,
all) volumes:

# metastat
d1: Concat/Stripe
    Size: 53003160 blocks (25 GB)
    Stripe 0:
        Device      Start Block  Dbase   Reloc
        c2t1d0s7          10773  Yes     Yes
    Stripe 1:
        Device      Start Block  Dbase   Reloc
        c1t2d0s7          10773  Yes     Yes
    Stripe 2:
        Device      Start Block  Dbase   Reloc
        c2t2d0s7          10773  Yes     Yes

d0: Concat/Stripe
    Size: 52999569 blocks (25 GB)
    Stripe 0: (interlace: 64 blocks)
        Device      Start Block  Dbase   Reloc
        c1t0d0s7          10773  Yes     Yes
        c2t0d0s7          10773  Yes     Yes
        c1t1d0s7          10773  Yes     Yes

Device Relocation Information:
Device    Reloc   Device ID
c2t1d0    Yes     id1,sd@SSEAGATE_ST39102LCSUN9.0GLJP46564000019451VGF
c1t2d0    Yes     id1,sd@SSEAGATE_ST39102LCSUN9.0GLJU8183300002007J3Z2
c2t2d0    Yes     id1,sd@SSEAGATE_ST39102LCSUN9.0GLJM7285500001943H5XD
c1t0d0    Yes     id1,sd@SSEAGATE_ST39102LCSUN9.0GLJR76697000019460DB4
c2t0d0    Yes     id1,sd@SSEAGATE_ST39102LCSUN9.0GLV00222700001005J6Q7
c1t1d0    Yes     id1,sd@SSEAGATE_ST39102LCSUN9.0GLJR58209000019461YK2
Let's explain the details of the above example. First notice that the
new concatenated volume, d1, consists of three stripes (Stripe 0,
Stripe 1, Stripe 2), each made from a single slice (c2t1d0s7,
c1t2d0s7, c2t2d0s7 respectively). When using the metastat
command to verify our volumes, we can see this is a concatenation
from the fact that it contains multiple stripes. The total size of the
concatenation is 27,137,617,920 bytes (512 * 53003160 blocks).
45. To ensure that this new file system is mounted each time the
machine is started, insert the following line into your /etc/vfstab
file (all on one line with tabs separating the fields):

/dev/md/dsk/d1   /dev/md/rdsk/d1   /db1   ufs   2   yes   -
Any file system including root (/), swap, and /usr, or any application
such as a database, can use a mirror. Basically, you can mirror any file
system, including existing file systems. You can also mirror large
applications, such as the data files for a database.
When creating a mirror, first create a one-way mirror, then attach a second
submirror. This starts a resync operation and ensures that data is not
corrupted.
You can create a one-way mirror for a future two- or three-way mirror.
Avoid having slices of submirrors on the same disk. Also, when possible,
use disks attached to different controllers to avoid single points-of-failure.
For maximum protection and performance, place each submirror on a
different physical disk and, when possible, on different disk controllers.
For further data availability, use hot spares with mirrors.
This section will contain the following five examples for creating different
types of two-way mirrors:
To perform the above mirror examples, I will be using the two disks:
c1t3d0 and c2t3d0. After creating each two-way mirror example, I will
be deleting the newly created mirror to get ready for the next example.
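The creation commands for this first example (a mirror from unused slices) appear to have been lost; based on the explanation below (mirror d20 with submirrors d21 and d22 on c1t3d0s7 and c2t3d0s7), they were likely:

```shell
# metainit d21 1 1 c1t3d0s7
d21: Concat/Stripe is setup
# metainit d22 1 1 c2t3d0s7
d22: Concat/Stripe is setup
# metainit d20 -m d21
d20: Mirror is setup
# metattach d20 d22
d20: submirror d22 is attached
```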
Let's explain the details of the above example. First notice that the
new mirror volume, d20, consists of two submirrors, (d21 and d22)
each made from a single slice (c1t3d0s7, c2t3d0s7 respectively).
When using the metastat command to verify our volumes, we can
see this is a mirror. The total size of the mirror is 9,045,872,640
bytes (512 * 17667720 blocks).
42. Now that we have created our mirror volume, and the
mirror resync is complete, we can pretend that the volume is
just a regular partition (slice) on which we can do the usual file
system things. Let's create a UFS file system using the newfs
command. As before, -i 8192 requests one inode per 8KB of data
space:
# newfs -i 8192 /dev/md/rdsk/d20
newfs: construct a new file system /dev/md/rdsk/d20: (y/n)? y
/dev/md/rdsk/d20: 17667720 sectors in 4920 cylinders of 27 tracks, 133 sectors
        8626.8MB in 308 cyl groups (16 c/g, 28.05MB/g, 3392 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 57632, 115232, 172832, 230432, 288032, 345632, 403232, 460832, 518432,
super-block backups for last 10 cylinder groups at:
 17123360, 17180960, 17238560, 17296160, 17353760, 17411360, 17468960,
 17526560, 17584160, 17641760,
52. To ensure that this new file system is mounted each time the
machine is started, insert the following line into your /etc/vfstab
file (all on one line with tabs separating the fields):
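The vfstab line itself is missing here; following the pattern of the other examples (the /db20 mount point is an assumption), it would be approximately:

```shell
/dev/md/dsk/d20   /dev/md/rdsk/d20   /db20   ufs   2   yes   -
```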
3. Use metainit -f to put the mounted file system's slice in a single-
slice (one-way) concat/stripe. (This will be the first submirror.) The
following command creates one stripe that contains one slice. The
new volume will be named d21:

# metainit -f d21 1 1 c1t3d0s7
d21: Concat/Stripe is setup
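The intermediate steps (creating the second submirror and the one-way mirror) appear to be missing from the text; they would be:

```shell
# metainit d22 1 1 c2t3d0s7
d22: Concat/Stripe is setup
# metainit d20 -m d21
d20: Mirror is setup
```

The file system is then unmounted before switching its vfstab entry over to the mirror.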
# umount /db20
10. Edit the /etc/vfstab file so that the existing file system entry
now refers to the newly created mirror. In the following example
snippet, I commented out the original entry for the c1t3d0s7 slice
and added a new entry that refers to the newly created mirrored
volume (d20) to be mounted on /db20:

#/dev/dsk/c1t3d0s7   /dev/rdsk/c1t3d0s7   /db20   ufs   2   yes   -
/dev/md/dsk/d20      /dev/md/rdsk/d20     /db20   ufs   2   yes   -
# mount /db20
13. Use the metattach command to attach the second submirror (d22):

# metattach d20 d22
d20: submirror d22 is attached
15. Attaching d22 (the second submirror) triggers a mirror resync. Use
the metastat command to view the progress of the resync:
# metastat d20
d20: Mirror
    Submirror 0: d21
      State: Okay
    Submirror 1: d22
      State: Resyncing
    Resync in progress: 15 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 17470215 blocks

d21: Submirror of d20
    State: Okay
    Size: 17470215 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c1t3d0s7           3591  Yes    Okay

d22: Submirror of d20
    State: Resyncing
    Size: 17470215 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c2t3d0s7           3591  Yes    Okay
41. From the above example, notice that we didn't create a multi-way
mirror right away. Rather, we created a one-way mirror with the
metainit command and then attached the additional submirror with
the metattach command. When the metattach command is not used,
no resync operations occur and data could become corrupted. Also,
do not create a two-way mirror for a file system without first
unmounting the file system, editing the /etc/vfstab file to
reference the mirrored volume, and mounting the file system on the
new mirrored volume before attaching the second submirror.
3. Use metainit -f to put the mounted file system's slice in a single-
slice (one-way) concat/stripe. (This will be the first submirror.) The
following command creates one stripe that contains one slice. The
new volume will be named d21:

# metainit -f d21 1 1 c0t0d0s6
d21: Concat/Stripe is setup
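The next steps (creating the second submirror and the one-way mirror) appear to be missing from the text; based on the metastat output below, which shows d22 on c2t3d0s7, they would be:

```shell
# metainit d22 1 1 c2t3d0s7
d22: Concat/Stripe is setup
# metainit d20 -m d21
d20: Mirror is setup
```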
9. Edit the /etc/vfstab file so that the file system (/usr) now refers
to the newly created mirror. In the example snippet, I commented
out the original entry for the c0t0d0s6 slice and added a new entry
that refers to the newly created mirror to be mounted on /usr:

#/dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6   /usr   ufs   1   no   -
/dev/md/dsk/d20      /dev/md/rdsk/d20     /usr   ufs   1   no   -
# reboot
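The attach command referenced in the next step is missing from the text; it would be:

```shell
# metattach d20 d22
d20: submirror d22 is attached
```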
14. Attaching d22 (the second submirror) triggers a mirror resync. Use
the metastat command to view the progress of the resync:
# metastat d20
d20: Mirror
    Submirror 0: d21
      State: Okay
    Submirror 1: d22
      State: Resyncing
    Resync in progress: 8 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 16781040 blocks

d21: Submirror of d20
    State: Okay
    Size: 16781040 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c0t0d0s6              0  No     Okay

d22: Submirror of d20
    State: Resyncing
    Size: 17470215 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c2t3d0s7           3591  Yes    Okay
40. From the above example, notice that we didn't create a multi-way
mirror right away for the /usr file system. Rather, we created a
one-way mirror with the metainit command and then attached the
additional submirror with the metattach command (after rebooting
the server). When the metattach command is not used, no resync
operations occur and data could become corrupted. Also, do not
create a two-way mirror for a file system like /usr without first
editing the /etc/vfstab file to reference the mirrored volume and
then rebooting the server before attaching the second submirror.
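The creation steps for the swap mirror appear to be missing from the text; based on the metastat output below (d21 on c0t0d0s3, d22 on c2t3d0s7), they would be:

```shell
# metainit -f d21 1 1 c0t0d0s3
d21: Concat/Stripe is setup
# metainit d22 1 1 c2t3d0s7
d22: Concat/Stripe is setup
# metainit d20 -m d21
d20: Mirror is setup
```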
9. Edit the /etc/vfstab file so that the swap entry now refers
to the newly created mirror. In the example snippet, I commented
out the original swap entry for the c0t0d0s3 slice and added a new
entry that refers to the newly created mirror:

#/dev/dsk/c0t0d0s3   -   -   swap   -   no   -
/dev/md/dsk/d20      -   -   swap   -   no   -
# reboot
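The attach command referenced in the next step is missing from the text; it would be:

```shell
# metattach d20 d22
d20: submirror d22 is attached
```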
14. Attaching d22 (the second submirror) triggers a mirror resync. Use
the metastat command to view the progress of the resync:
# metastat d20
d20: Mirror
    Submirror 0: d21
      State: Okay
    Submirror 1: d22
      State: Resyncing
    Resync in progress: 32 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 2101200 blocks

d21: Submirror of d20
    State: Okay
    Size: 2101200 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c0t0d0s3              0  No     Okay

d22: Submirror of d20
    State: Resyncing
    Size: 17470215 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c2t3d0s7           3591  Yes    Okay
40. Verify that the swap file system is mounted on the d20 volume:

# swap -l
swapfile             dev     swaplo   blocks     free
/dev/md/dsk/d20      85,20       16  2101184  2101184
43. From the above example, notice that we didn't create a multi-way
mirror right away for the swap file system. Rather, we created a
one-way mirror with the metainit command and then attached the
additional submirror with the metattach command (after rebooting
the server). When the metattach command is not used, no resync
operations occur and data could become corrupted. Also, do not
create a two-way mirror for swap without first editing the
/etc/vfstab file to reference the mirrored volume and then
rebooting the server before attaching the second submirror.
1. Use the following procedures to mirror the root (/) file system on a
SPARC system.
NOTE: The task for using the command-line to mirror root (/) on an x86 system is
different from the task used for a SPARC system.
When mirroring root (/), it is essential that you record the secondary root slice name to
reboot the system if the primary submirror fails. This information should be written down,
not recorded on the system, which may not be available in the event of a disk failure.
2. Use metainit -f to put the root (/) slice in a single-slice (one-
way) concat. (This will be the first submirror.)
The following command creates one stripe that contains one slice.
The new volume will be named d21:
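The metainit commands themselves are missing from the text; assuming root (/) lives on c0t0d0s0 and the second submirror uses c2t3d0s7 (both assumptions), they would look like:

```shell
# metainit -f d21 1 1 c0t0d0s0
d21: Concat/Stripe is setup
# metainit d22 1 1 c2t3d0s7
d22: Concat/Stripe is setup
# metainit d20 -m d21
d20: Mirror is setup
```

The metaroot, lockfs, and reboot steps that follow then switch the system to boot from the mirror; metattach d20 d22 is run after the reboot.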
# metaroot d20
# lockfs -fa
# reboot
The system must contain at least three state database replicas before you
can create RAID5 volumes.
Follow the 20-percent rule when creating a RAID5 volume: because of the
complexity of parity calculations, volumes with greater than about 20
percent writes should probably not be RAID5 volumes. If data redundancy
is needed, consider mirroring.
Use the same size disk slices. Creating a RAID5 volume from different
size slices results in unused disk space in the volume.
Do not create a RAID5 volume from a slice that contains an existing file
system. Doing so will erase the data during the RAID5 initialization
process.
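The creation command for the RAID 5 volume appears to be missing; given the metastat output below (volume d3 on three slices, interlace 32 blocks, i.e., the 16KB default), it was likely:

```shell
# metainit d3 -r c1t4d0s7 c2t4d0s7 c1t5d0s7
d3: RAID is setup
```

The -r option builds a RAID 5 volume from the listed slices; initialization of the member slices then proceeds in the background.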
When the slices within the RAID 5 volume have completed their
initialization phase, the metastat output will look like this:
# metastat d3
d3: RAID
State: Okay
Interlace: 32 blocks
Size: 35331849 blocks (16 GB)
Original device:
Size: 35334720 blocks (16 GB)
        Device      Start Block  Dbase  State  Reloc  Hot Spare
        c1t4d0s7          11103  Yes    Okay   Yes
        c2t4d0s7          11103  Yes    Okay   Yes
        c1t5d0s7          11103  Yes    Okay   Yes
21. Now that we have created our RAID 5 volume, we can pretend
that the volume is a big partition (slice) on which we can do the
usual file system things. Let's create a UFS file system using
the newfs command. As before, -i 8192 requests one inode per
8KB of data space:
# newfs -i 8192 /dev/md/rdsk/d3
newfs: construct a new file system /dev/md/rdsk/d3: (y/n)? y
Warning: 1 sector(s) in last cylinder unallocated
/dev/md/rdsk/d3: 35331848 sectors in 9839 cylinders of 27 tracks, 133 sectors
        17251.9MB in 615 cyl groups (16 c/g, 28.05MB/g, 3392 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 57632, 115232, 172832, 230432, 288032, 345632, 403232, 460832, 518432,
Initializing cylinder groups:
.............
super-block backups for last 10 cylinder groups at:
 34765088, 34822688, 34880288, 34933280, 34990880, 35048480, 35106080,
 35163680, 35221280, 35278880,
35. To ensure that this new file system is mounted each time the
machine is started, insert the following line into your /etc/vfstab
file (all on one line with tabs separating the fields):
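The vfstab line itself is missing; following the earlier examples (the /db3 mount point is an assumption), it would be approximately:

```shell
/dev/md/dsk/d3   /dev/md/rdsk/d3   /db3   ufs   2   yes   -
```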