failed. For this reason, do not use RAID level-5 for the quorum drive. You must
configure the quorum drive as a RAID level-1 logical drive.
Note: Define hot-spare drives in your array to minimize the time that logical
drives remain in critical state.
v   Every logical drive that is shared by the two servers must have its cache policy set to write-through mode to ensure that data integrity is maintained. Logical drives that are not shared between the two servers can be configured for write-back mode for improved performance.
v   Create only one logical drive for each array.
v   You must assign a merge group number in the range 1–8 to each logical drive that will be shared. Merge group numbers must be unique for each shared logical drive in the cluster. You must assign merge group number 206 or 207 to the non-shared logical drives. (A configuration cross-check example follows this list.)
v   If you are starting (booting) the operating system from a shared controller, define the first logical drive as the startup drive and assign it a merge group number for a non-shared drive (for example, 206 for Server A).
v   The total number of logical drives per controller must be eight or fewer, both before and after a failover. If you exceed this number, a failover will not complete.
v   Logical drives that are currently undergoing Logical Drive Migration (LDM) operations will not fail over. However, all other logical drives will fail over if necessary.
v   If a failover occurs while a critical RAID level-1 logical drive is rebuilding to a spare disk, the Rebuild operation starts automatically a few seconds after the failover is completed.
v   The cluster support software initiates a synchronization of RAID level-1 and RAID level-5 logical drives immediately after a failover. If a drive fails before this synchronization is complete, the logical drive is placed in the blocked state and is no longer accessible.
v   When a logical drive spans multiple SCSI channels and a failure that is unique to one channel occurs within the drive subsystem (for example, a disconnected cable), the entire physical array is identified as failed, even though access from the surviving server can occur. Therefore, if you have small arrays, consider not spanning them across multiple channels.
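The configuration rules above can be cross-checked before you begin installing the cluster software. The following Python sketch is for illustration only and is not part of the ServeRAID tools; the LogicalDrive record and its field names are assumptions. It checks the merge group, cache policy, and drive-count rules for a planned two-server layout.

   from dataclasses import dataclass

   @dataclass
   class LogicalDrive:
       name: str
       shared: bool          # shared between the two servers?
       merge_group: int      # 1-8 for shared drives, 206 or 207 for non-shared
       cache_policy: str     # "write-through" or "write-back"

   def check_cluster_plan(server_a, server_b):
       """Return a list of rule violations for a planned two-server layout."""
       problems = []
       all_drives = server_a + server_b
       shared = [d for d in all_drives if d.shared]

       # Shared drives: merge group 1-8, unique in the cluster, write-through cache.
       groups = [d.merge_group for d in shared]
       if len(groups) != len(set(groups)):
           problems.append("duplicate merge group number on shared logical drives")
       for d in shared:
           if not 1 <= d.merge_group <= 8:
               problems.append(d.name + ": shared drives need a merge group of 1-8")
           if d.cache_policy != "write-through":
               problems.append(d.name + ": shared drives must use write-through cache")

       # Non-shared drives: merge group 206 or 207.
       for d in all_drives:
           if not d.shared and d.merge_group not in (206, 207):
               problems.append(d.name + ": non-shared drives use merge group 206 or 207")

       # After a failover, one controller owns its own drives plus the partner's
       # shared drives; that total must not exceed eight.
       for mine, partner in ((server_a, server_b), (server_b, server_a)):
           if len(mine) + sum(1 for d in partner if d.shared) > 8:
               problems.append("more than eight logical drives on one controller after a failover")

       return problems

For example, a layout with a RAID level-1 quorum drive at merge group 1 (write-through) and a non-shared startup drive at merge group 206 would pass these checks.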
Use the Validate cluster feature in the ServeRAID Manager program to verify that your cluster is properly configured.
Note: If you must initialize your cluster configuration from the controller, you must manually reset the following settings, because backup configurations do not save them (a simple record-keeping sketch follows this list):
v   SCSI initiator IDs
v   Host name
v   Partner name
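Because a backup configuration does not preserve these three settings, it can help to record them for each server before you initialize the controller and then re-enter them in the ServeRAID Manager program. A minimal sketch, assuming a simple record of your own design; the field names and example values are hypothetical and are not ServeRAID data.

   from dataclasses import dataclass

   @dataclass
   class ClusterIdentity:
       scsi_initiator_ids: dict   # SCSI channel number -> initiator ID
       host_name: str             # this controller's cluster host name
       partner_name: str          # the partner controller's host name

   # Hypothetical values recorded for Server A before initializing; re-enter
   # them manually after the configuration is restored.
   server_a = ClusterIdentity(
       scsi_initiator_ids={1: 6, 2: 6, 3: 6, 4: 6},
       host_name="SERVER_A",
       partner_name="SERVER_B",
   )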