
Disk Array High Availability Features
RAID 3 works well for single-task applications using large block I/Os. It is not a good
choice for transaction processing systems because the dedicated parity drive is a
performance bottleneck. Whenever data is written to a data disk, a write must also be
performed to the parity drive. On write operations, the parity disk can be written to four
times as often as any other disk module in the group.
RAID 5
RAID 5 uses parity to achieve data redundancy and disk striping to enhance performance.
Data and parity information is distributed across all the disks in the RAID 5 LUN. A RAID 5
LUN consists of three or more disks. For highest availability, the disks in a RAID 5 LUN
must be in different enclosures.
If a disk fails or becomes inaccessible, the disk array can dynamically reconstruct all user
data from the data and parity information on the remaining disks. When a failed disk is
replaced, the disk array automatically rebuilds the contents of the failed disk on the new
disk. The rebuilt LUN contains an exact replica of the information it would have contained
had the disk not failed.
Until a failed disk is replaced (or a rebuild on a global hot spare is completed), the LUN
operates in degraded mode. The LUN must now use the data and parity on the remaining
disks to recreate the content of the failed disk, which reduces performance. In addition,
while in degraded mode, the LUN is susceptible to the failure of a second disk. If a
second disk in the LUN fails while in degraded mode, parity can no longer be used and all
data on the LUN becomes inaccessible.
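The redundancy and rebuild behavior described above rests on XOR parity: the parity block is the XOR of the data blocks in a stripe, and because XOR is its own inverse, any one missing block can be recomputed from the survivors. The following is an illustrative sketch of that arithmetic (not the array's firmware); block contents and sizes are made up for the example.

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks byte-by-byte to form the parity block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def reconstruct(surviving: list[bytes]) -> bytes:
    """Rebuild the single missing block from the surviving data and parity.

    XOR is its own inverse, so XOR-ing everything that survives yields
    exactly the block that was lost.
    """
    return parity(surviving)

# Three data blocks on three disks; parity goes to a fourth.
data = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]
p = parity(data)

# Simulate losing the second disk, then rebuild its contents.
rebuilt = reconstruct([data[0], data[2], p])
assert rebuilt == data[1]
```

This also shows why a second failure in degraded mode is fatal: with two blocks missing from a stripe, a single parity block no longer determines either of them.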
Figure 22 illustrates the distribution of user and parity data in a five-disk RAID 5 LUN. The
stripe segment size is 8 blocks, and the stripe size is 40 blocks (8 blocks times 5 disks).
The disk block addresses in the stripe proceed sequentially from the first disk to the
second, third, fourth, and fifth, then back to the first, and so on.
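The sequential block addressing described above can be sketched as simple modular arithmetic. This is an illustrative example using the figure's values (8-block segments, 5 disks), not the array's actual layout algorithm; in particular, it ignores the parity segment that a real RAID 5 stripe reserves on one of the disks.

```python
SEGMENT_BLOCKS = 8
DISKS = 5
STRIPE_BLOCKS = SEGMENT_BLOCKS * DISKS  # 40 blocks per full stripe

def locate(block: int) -> tuple[int, int]:
    """Map a logical block address to (disk index, block offset on that disk)."""
    stripe, within_stripe = divmod(block, STRIPE_BLOCKS)
    disk, offset = divmod(within_stripe, SEGMENT_BLOCKS)
    return disk, stripe * SEGMENT_BLOCKS + offset

assert locate(0) == (0, 0)    # addressing starts on the first disk
assert locate(8) == (1, 0)    # a segment boundary moves to the second disk
assert locate(40) == (0, 8)   # after the fifth disk, wrap back to the first
```

Consecutive segments land on consecutive disks, which is what lets large sequential I/Os keep all five spindles busy at once.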