
Disk Array High Availability Features
Disk Mirroring
The disk array uses hardware mirroring, in which the disk array automatically
synchronizes the two disk images without user or operating system involvement. This
differs from software mirroring, in which the host operating system (for example, LVM)
synchronizes the disk images.
Disk mirroring is used by RAID 1 and RAID 0/1 LUNs. A RAID 1 LUN consists of exactly
two disks: a primary disk and a mirror disk. A RAID 0/1 LUN consists of an even number of
disks, half of which are primary disks and the other half are mirror disks. If a disk fails or
becomes inaccessible, the remaining disk of the mirrored pair provides uninterrupted data
access. After a failed disk is replaced, the disk array automatically rebuilds a copy of the
data from its companion disk. To protect mirrored data from a channel or internal bus
failure, each disk in the LUN should be in a different enclosure.
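The mirroring behavior described above can be summarized with a minimal Python sketch, written for illustration only; it is not the array firmware, and the MirroredPair name and the block-list disk model are assumptions made for this example.

    # Minimal sketch of how a RAID 1 mirrored pair behaves conceptually.
    # Disk contents are modeled as lists of blocks; all names are illustrative.

    class MirroredPair:
        def __init__(self, num_blocks):
            self.primary = [0] * num_blocks   # primary disk image
            self.mirror = [0] * num_blocks    # mirror disk image

        def write(self, block, data):
            # Every host write is applied to both disk images by the array,
            # with no operating system involvement.
            self.primary[block] = data
            self.mirror[block] = data

        def read(self, block, primary_failed=False):
            # If one disk fails, the surviving image still satisfies reads.
            return self.mirror[block] if primary_failed else self.primary[block]

        def rebuild_primary(self):
            # After a failed primary disk is replaced, the array copies the
            # data back from its companion (mirror) disk.
            self.primary = list(self.mirror)

    pair = MirroredPair(num_blocks=8)
    pair.write(3, 0xAB)
    assert pair.read(3, primary_failed=True) == 0xAB  # data survives a primary failure
    pair.rebuild_primary()                            # replacement disk is resynchronized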
Data Parity
Data parity is a second technique used to achieve data redundancy. If a disk fails or
becomes inaccessible, the parity data can be combined with data on the remaining disks in
the LUN to reconstruct the data on the failed disk. Data parity is used for RAID 3 and RAID
5 LUNs.
To ensure high availability, each disk in the LUN should be in a separate enclosure. Parity
cannot be used to reconstruct data if more than one disk in the LUN is unavailable.
Parity is calculated on each write I/O by performing a serial binary exclusive OR (XOR) of
the data segments in the stripe written to the data disks in the LUN. The exclusive OR
operation produces a result of 0 whenever the number of binary 1s is even.
Figure 17 illustrates the process for calculating parity on a five-disk LUN. The data written
to the first disk is XOR'd with the data written to the second disk. That result is XOR'd
with the data on the third disk, and the new result is XOR'd with the data on the fourth
disk. The final result, which is the parity, is written to the fifth disk. If any data bit changes
state, the parity also changes to maintain a result of 0.
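The following is a minimal Python sketch of this parity calculation, assuming a five-disk LUN with four data disks and one parity disk; the xor_parity and reconstruct names and the single-value stripe segments are illustrative assumptions, not part of the disk array firmware.

    # Sketch of XOR parity for one stripe of a five-disk LUN (four data
    # segments plus one parity segment). Names and data are illustrative.

    from functools import reduce

    def xor_parity(segments):
        # Serial XOR of the data segments in the stripe; the result is the
        # parity written to the fifth disk.
        return reduce(lambda a, b: a ^ b, segments, 0)

    def reconstruct(surviving_segments, parity):
        # XOR of the surviving segments with the parity recreates the
        # segment that was on the failed disk.
        return xor_parity(surviving_segments) ^ parity

    stripe = [0b1011, 0b0110, 0b1100, 0b0001]   # data written to disks 1-4
    parity = xor_parity(stripe)                 # written to disk 5

    # XOR of all data plus parity is 0: an even number of 1s in every bit
    # position yields a 0 result.
    assert xor_parity(stripe + [parity]) == 0

    # Simulate losing disk 3 and rebuilding its segment from the rest.
    survivors = [s for i, s in enumerate(stripe) if i != 2]
    assert reconstruct(survivors, parity) == stripe[2]

Because XOR is its own inverse, the same operation that generates the parity also regenerates a missing segment, which is why parity can recover from the loss of only one disk in the LUN.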