previous user data, but will not build parity. Recommended for testing purposes
only or when new disks are used. Not recommended for RAID 5 and RAID 6.
Foreground: The array initialization process runs at high priority. During
this time the array will be inaccessible, but initialization will complete
sooner.
Background: The array initialization process runs at a lower priority. During
this time the array remains accessible, but initialization will take longer.
Note 1: Initialization takes a significant amount of time (approximately 2 hours per 1 TB).
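Because the initialization time scales roughly linearly with capacity, the wait can be estimated up front. A minimal sketch, assuming the approximate 2-hours-per-TB rate noted above (the function name and example capacity are illustrative):

```python
# Rough initialization-time estimate based on the manual's stated
# rate of about 2 hours per 1 TB. This is an approximation, not an
# exact controller specification.
HOURS_PER_TB = 2

def init_time_hours(capacity_tb):
    """Return the estimated initialization time in hours."""
    return capacity_tb * HOURS_PER_TB

print(init_time_hours(8))  # an 8 TB array -> about 16 hours
```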
Cache Policy (Default: Write Back)
Write Back – Any data written to the array is first stored in cache, giving
better I/O performance at the risk of data loss during power outages. Because
data sits in the cache before it is physically written to the disk, any data
still in the cache when a power outage occurs will be lost.
Write Through – Data written to the array is written directly to the disk,
trading lower write performance for higher data availability. Without the cache
acting as a buffer, write performance is noticeably slower, but data loss due
to power outages or other failures is minimized.
Block Size (Default: 64K)
A block size of 64 KB is recommended since it gives balanced performance for most
applications.
Capacity (Default: Maximum)
The total amount of space you want the RAID array to use. When creating a RAID
array, the usable capacity of each disk is limited by the capacity of the
smallest disk.
Example Capacity calculation:
A RAID 5 array organizes data in the manner shown below. All parity data is
unusable by the user and is not included in the total disk capacity.
Disk 1    Disk 2    Disk 3    Disk 4
Data 1    Data 2    Data 3    Parity
Data 4    Data 5    Parity    Data 6
Data 7    Parity    Data 8    Data 9
Parity    Data 10   Data 11   Data 12
Therefore, RAID 5 capacity will be [SMALLEST DISK CAPACITY] * (number of disks – 1).
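The capacity formula above can be sketched as a small calculator. A minimal example, assuming disk sizes are given in TB (the function name and the example sizes are illustrative):

```python
def raid5_capacity(disk_sizes_tb):
    """Usable RAID 5 capacity: smallest disk times (number of disks - 1),
    since one disk's worth of space is consumed by parity."""
    if len(disk_sizes_tb) < 3:
        raise ValueError("RAID 5 requires at least three disks")
    return min(disk_sizes_tb) * (len(disk_sizes_tb) - 1)

# Four disks of 4 TB, 4 TB, 4 TB, and 2 TB: capacity is limited by the
# smallest disk, so usable space is 2 * (4 - 1) = 6 TB.
print(raid5_capacity([4, 4, 4, 2]))  # -> 6
```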