Chapter 8. Using Clustering and Locking Systems
Over half of the lock_gulmd servers on the nodes listed in the cluster.ccs file (cluster.ccs:cluster/lock_gulm/servers) must be operating to process locking requests from GFS nodes. That quorum requirement prevents split groups of servers from forming independent clusters, which would lead to file system corruption.
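For reference, a cluster.ccs file listing three lock_gulmd servers might look like the following sketch. The cluster name and node names are examples only; substitute the names used in your cluster.

```
cluster {
    name = "alpha"
    lock_gulm {
        servers = ["n01", "n02", "n03"]
    }
}
```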
For example, if three lock_gulmd servers are listed in the cluster.ccs configuration file, two of those three servers (a quorum) must be running for the cluster to operate. A lock_gulmd server that fails can rejoin the existing servers when it is restarted. When running redundant lock_gulmd servers, the minimum number of server nodes is three; the maximum number is five.
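The "over half" rule can be expressed as a one-line calculation. The following is a minimal sketch in shell; the variable names are illustrative.

```shell
# Quorum for LOCK_GULM: more than half of the listed servers
# must be running. For N servers, that is floor(N/2) + 1.
servers=3
quorum=$(( servers / 2 + 1 ))
echo "With $servers servers listed, $quorum must be running."
```

With five servers listed, the same calculation requires three to be running.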
8.2.3. Starting LOCK_GULM Servers
If no lock_gulmd servers are running in the cluster, use caution before restarting them: you must verify that no GFS nodes are hung from a previous instance of the cluster. If there are hung GFS nodes, reset them before starting the lock_gulmd servers; doing so prevents file system corruption. Also, make sure that all nodes running lock_gulmd can communicate over the network; that is, there is no network partition.
The lock_gulmd server is started with no command line options.
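For example, on each server node you might start the daemon and then confirm that it is responding. This is a sketch; gulm_tool subcommands can vary between releases, so treat the getstats invocation as an assumption to verify against your release.

```shell
# Start the LOCK_GULM server daemon (no command line options).
lock_gulmd

# Assumption: gulm_tool can query a running server for its state.
gulm_tool getstats localhost
```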
8.2.4. Fencing and LOCK_GULM
Cluster state is managed in the lock_gulmd server. When GFS nodes or server nodes fail, the lock_gulmd server initiates a fence operation for each failed node and waits for the fence to complete before proceeding with recovery.
The master lock_gulmd server fences failed nodes by calling the fence_node command with the name of the failed node. That command looks up the fencing configuration in CCS to carry out the fence operation.
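The fencing configuration that fence_node consults lives in the CCS files. A fence.ccs fragment describing a power switch might look like the following sketch; the device name and parameters are examples only.

```
fence_devices {
    apc {
        agent = "fence_apc"
        ipaddr = "10.0.0.1"
        login = "apc"
        passwd = "apc"
    }
}
```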
When using RLM, you must use a fencing method that shuts down and reboots a node. With RLM you cannot use any method that does not reboot the node.
8.2.5. Shutting Down a LOCK_GULM Server
Before shutting down a node running a LOCK_GULM server, lock_gulmd should be terminated using the gulm_tool command. If lock_gulmd is not properly stopped, the LOCK_GULM server may be fenced by the remaining LOCK_GULM servers.
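A clean shutdown sequence might look like the following sketch. The node name is an example, and the exact gulm_tool subcommand is an assumption to check against your release.

```shell
# Cleanly stop the LOCK_GULM server on node n01 before taking
# the node down, so the remaining servers do not fence it.
gulm_tool shutdown n01
```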
Caution
Shutting down one of multiple redundant LOCK_GULM servers may result in suspension of cluster operation if the number of remaining servers is half or less of the total number of servers listed in the cluster.ccs file (cluster.ccs:lock_gulm/servers).