Chapter 8. Using Clustering and Locking Systems
For optimal performance, lock_gulmd servers should be run on dedicated nodes; however, they can also be run on nodes using GFS. All nodes, including those only running lock_gulmd, must be listed in the nodes.ccs configuration file (nodes.ccs:nodes).
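The exact entries depend on your cluster hardware and fencing setup, but as a minimal sketch, a node entry in nodes.ccs generally takes the following form; the node name, interface, IP address, and fence device shown here are placeholders, not values from this guide:

    nodes {
        n01 {
            ip_interfaces {
                eth0 = "10.0.0.1"
            }
            fence {
                power {
                    apc {
                        port = 1
                    }
                }
            }
        }
    }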
8.2.2. Number of LOCK_GULM Servers
You can use just one lock_gulmd server; however, if it fails, the entire cluster that depends on it must be reset. For that reason, you can run multiple instances of the lock_gulmd server daemon on multiple nodes for redundancy. The redundant servers allow the cluster to continue running if the master lock_gulmd server fails.
Over half of the lock_gulmd servers on the nodes listed in the cluster.ccs file (cluster.ccs:cluster/lock_gulm/servers) must be operating to process locking requests from GFS nodes. That quorum requirement is necessary to prevent split groups of servers from forming independent clusters, which would lead to file system corruption.
For example, if there are three lock_gulmd servers listed in the cluster.ccs configuration file, two of those three lock_gulmd servers (a quorum) must be running for the cluster to operate.
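As a sketch of how such a three-server configuration might appear in cluster.ccs, the cluster name and node names below being placeholders:

    cluster {
        name = "alpha"
        lock_gulm {
            servers = ["n01", "n02", "n03"]
        }
    }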
A lock_gulmd server can rejoin the existing servers if it fails and is restarted.
When running redundant lock_gulmd servers, the minimum number of nodes required is three; the maximum number of nodes is five.
8.2.3. Starting LOCK_GULM Servers
If no lock_gulmd servers are running in the cluster, use caution before restarting them: you must verify that no GFS nodes are hung from a previous instance of the cluster. If there are hung GFS nodes, reset them before starting lock_gulmd servers. Resetting the hung GFS nodes before starting lock_gulmd servers prevents file system corruption. Also, be sure that all nodes running lock_gulmd can communicate over the network; that is, there is no network partition.
The lock_gulmd server is started with no command line options.
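For example, starting the daemon on a server node and then checking that it is running might look like the following; the node name is a placeholder, and the gulm_tool getstats query is offered only as an assumed way to verify the server state:

    # lock_gulmd
    # gulm_tool getstats n01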
8.2.4. Fencing and LOCK_GULM
Cluster state is managed in the lock_gulmd server. When GFS nodes or server nodes fail, the lock_gulmd server initiates a fence operation for each failed node and waits for the fence to complete before proceeding with recovery.
The master lock_gulmd server fences failed nodes by calling the fence_node command with the name of the failed node. That command looks up fencing configuration in CCS to carry out the fence operation.
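For illustration, the same command can be invoked by hand with the name of the node to be fenced; the node name here is a placeholder:

    # fence_node n02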
When using RLM, you need to use a fencing method that shuts down and reboots a node. With RLM you cannot use any method that does not reboot the node.
8.2.5. Shutting Down a LOCK_GULM Server
Before shutting down a node running a LOCK_GULM server,
lock_gulmd
should be terminated
using the
gulm_tool
command. If
lock_gulmd
is not properly stopped, the LOCK_GULM server
may be fenced by the remaining LOCK_GULM servers.
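A minimal sketch of such a shutdown, assuming the shutdown subcommand of gulm_tool and using a placeholder node name, is:

    # gulm_tool shutdown n01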