RAID drive groups also improve data storage reliability and fault tolerance compared to single-drive storage
systems. Data loss resulting from a drive failure can be prevented by reconstructing missing data from the
remaining drives.
The following list describes some of the most commonly used RAID levels:
• RAID 0: block-level striping without parity or mirroring
Simple stripe sets are normally referred to as RAID 0. RAID 0 uses striping to provide high data
throughput, especially for large files in an environment that does not require fault tolerance. RAID 0
provides improved performance and additional storage capacity, but no redundancy or fault tolerance.
Any single drive failure destroys the array, and the likelihood of failure increases with the number of
drives in the array. RAID 0 does not implement error checking, so any error is uncorrectable. More
drives in the array mean higher bandwidth but a greater risk of data loss (see the striping example
after this list).
RAID 0 requires a minimum of two hard disk drives.
• RAID 1: mirroring without parity or striping
RAID 1 uses mirroring so that data written to one drive is simultaneously written to another drive. This is
good for small databases or other applications that require small capacity but complete data redundancy.
RAID 1 provides fault tolerance from disk errors or failures and continues to operate as long as at least
one drive in the mirrored set is functioning. With appropriate operating system support, there can be
increased read performance and only a minimal write performance reduction.
RAID 1 requires a minimum of two hard disk drives.
• RAID 5: block-level striping with distributed parity
RAID 5 uses disk striping and parity data across all drives (distributed parity) to provide high data
throughput, especially for small random-access workloads. RAID 5 distributes parity along with the data
and requires all drives but one to be present to operate; a failed drive must be replaced, but the array is
not destroyed by a single drive failure. Upon drive failure, any subsequent read operations can be
satisfied by calculating the missing data from the distributed parity, so the drive failure is masked from
the end user (see the parity example after this list). The array will lose data in the event of a second
drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement
drive. A single drive failure in the set results in reduced performance of the entire set until the failed
drive has been replaced and rebuilt.
RAID 5 requires a minimum of three hard disk drives.
• RAID 10: a combination of RAID 0 and RAID 1
RAID 10 consists of striped data across mirrored spans. A RAID 10 drive group is a spanned drive
group that creates a striped set from a series of mirrored drives. RAID 10 allows a maximum of eight
spans. You must use an even number of drives in each RAID virtual drive in the span. The RAID 1
virtual drives must have the same stripe size. RAID 10 provides high data throughput and complete data
redundancy but uses a larger number of spans.
RAID 10 requires a minimum of four hard disk drives and an even total number of drives, for example,
six or eight hard disk drives.
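The following sketch is illustrative only; it is not vendor firmware or driver code, and the drive count, block size, data values, and helper names (raid0_location, xor_blocks) are assumptions made for this example. It shows the two ideas used above: how RAID 0 striping maps logical blocks across the drives in a drive group, and how RAID 5 can rebuild a missing block from the byte-wise XOR parity of the surviving blocks.

# Minimal sketch of RAID 0 striping and RAID 5 parity reconstruction (illustrative assumptions only).
from functools import reduce

NUM_DRIVES = 3  # assumed drive-group size for this example

def raid0_location(logical_block, num_drives=NUM_DRIVES):
    """Map a logical block number to (drive index, stripe row) under RAID 0 striping."""
    return logical_block % num_drives, logical_block // num_drives

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks; this is how RAID 5 parity is formed."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# RAID 0: logical blocks rotate across the drives, so bandwidth scales with the
# number of drives, but any single drive failure destroys the array.
for lb in range(6):
    drive, row = raid0_location(lb)
    print("logical block", lb, "-> drive", drive, ", stripe row", row)

# RAID 5 (3-drive example): two data blocks plus one parity block form a stripe.
d0 = bytes([0x01, 0x02, 0x03, 0x04])
d1 = bytes([0x10, 0x20, 0x30, 0x40])
parity = xor_blocks([d0, d1])

# Simulate losing the drive that held d1: the XOR of the surviving blocks recovers it.
recovered = xor_blocks([d0, parity])
assert recovered == d1
print("missing block rebuilt from parity:", recovered.hex())

In the example, reconstructing from the parity block returns the lost data block exactly, which is why a single drive failure in a RAID 5 drive group degrades performance but does not destroy the array.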
Configuring the system BIOS to enable onboard SATA RAID functionality
This section describes how to configure the system BIOS to enable the onboard SATA RAID functionality.
Note: Use the arrow keys on the keyboard to make selections.
To enable SATA RAID functionality, do the following:
1. Start the Setup Utility program. See “Starting the Setup Utility program” on page 21.
2. Select Devices ➙ ATA Drive Setup.
3. Select Configure SATA as and press Enter.
4. Select RAID Mode and press Enter.