Administering RAID Arrays
5-18 Express5800/ftServer: System Administrator's Guide for the Linux Operating System
md2 : active raid1 sdb3[1] sda3[0]
      31647936 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
unused devices: <none>
#
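A fully mirrored array shows [UU] in its status line; a degraded array replaces a U with an underscore (for example, [2/1] [U_]). As a quick check, output in this format can be scanned with a short script. The sketch below is illustrative, not part of OSM; it embeds a copy of the sample output above, and on a live system you would point the awk program at /proc/mdstat instead.

```shell
# Sketch: report whether each md array is fully mirrored ([UU]) or
# degraded (contains an underscore, e.g. [U_]).
# The sample text copies the /proc/mdstat output shown above; on a
# live system, feed /proc/mdstat to the awk program directly.
mdstat='md2 : active raid1 sdb3[1] sda3[0]
      31647936 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]'

printf '%s\n' "$mdstat" | awk '
  /^md/    { dev = $1 }                                        # remember the array name
  /blocks/ { print dev ($0 ~ /_/ ? " is degraded" : " is healthy") }'
```

On this sample, both arrays report healthy; after a disk failure the corresponding lines would report degraded.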
N O T E
The device names displayed in /proc/mdstat are the kernel names for each device. These are different from the user device names displayed by the mdadm command.
As long as a mirror is missing or a resynchronization is in progress, the RAID array and the CPU-I/O enclosure run simplex on the active mirror.
Replacing a Failed Disk
When you need to replace a failed disk, the OSM plugin can automatically add the
replacement disk to a running RAID array, provided the following conditions exist:
• The replacement disk must be blank, as defined by the current safe mode setting. If safe mode is active, zero the disk's partition table and RAID superblocks, then remove and reinsert the disk to start the automatic disk replacement. For more information about safe mode, see "Configuring Safe Mode."
• Do not reboot the system or stop and restart OSM after you remove the failed disk until you have inserted the replacement disk and it has synchronized with its partner. The information necessary to perform automatic disk replacement is not persistent; if OSM is restarted, the replacement disk must be paired using a different method.
• The failed disk must have been paired with one (and only one) partner disk. For example, if /dev/md4 consisted of partitions sda1 and sdb1, and /dev/md5 consisted of sdb2 and sdc2, automatic disk replacement would not work for disk sdb. In addition, for any partitions belonging to RAID1 arrays, the partition numbers on the failed disk and its partner must be the same.
• The failed disk must belong to a RAID1 array built directly on a disk, partition, or multipath device. If the failed disk belongs to a RAID0 (even if that RAID0 is part of a RAID1), the blank disk will not be added to the RAID array.
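When automatic replacement is unavailable (for example, because OSM was restarted after the failed disk was removed), the disk can be blanked and paired manually with standard mdadm and sfdisk commands. The following is a sketch under assumed names, not the exact OSM procedure: the surviving partner is taken to be /dev/sda, the replacement /dev/sdb, and /dev/md0 and /dev/md2 the degraded arrays, as in the sample output earlier. DRYRUN=echo prints each command instead of running it; clear it only on a real system, because these commands are destructive.

```shell
# Manual replacement sketch. Device names (/dev/sda, /dev/sdb, md0,
# md2) are assumptions for illustration; match them to /proc/mdstat.
# DRYRUN=echo prints commands instead of executing them (destructive!).
DRYRUN=echo
DISK=/dev/sdb
PARTNER=/dev/sda

# 1. Blank the replacement: clear any old RAID superblocks, then the
#    partition table, so the disk counts as blank under safe mode.
for part in "${DISK}1" "${DISK}2" "${DISK}3"; do
  $DRYRUN mdadm --zero-superblock "$part"
done
$DRYRUN dd if=/dev/zero of="$DISK" bs=512 count=1

# 2. Copy the partner's partition layout so partition numbers match
#    (dump the layout; on a live system pipe it into: sfdisk $DISK).
$DRYRUN sfdisk -d "$PARTNER"

# 3. Re-add each partition to its array, then watch /proc/mdstat
#    until the [UU] state returns.
$DRYRUN mdadm /dev/md0 --add "${DISK}1"
$DRYRUN mdadm /dev/md2 --add "${DISK}3"
```

After step 3, the resynchronization progress appears in /proc/mdstat; the array remains simplex until it completes.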