Chapter 8. Using Clustering and Locking Systems
This chapter describes how to use the clustering and locking systems available with GFS, and consists of the following sections:
• Section 8.1 Locking System Overview
• Section 8.2 LOCK_GULM
• Section 8.3 LOCK_NOLOCK
8.1. Locking System Overview
The GFS OmniLock interchangeable locking/clustering system is made possible by the lock_harness.o kernel module. The GFS kernel module gfs.o connects to one end of the harness, and lock modules connect to the other end. When a GFS file system is created, the lock protocol (or lock module) that it uses is specified. The kernel module for the specified lock protocol must then be loaded before the file system can be mounted. The following lock protocols are available with GFS:
• LOCK_GULM — Implements both OmniLock RLM and SLM and is the recommended choice
• LOCK_NOLOCK — Provides no locking and allows GFS to be used as a local file system
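For example, on a node that mounts a GFS/LOCK_GULM file system, the modules might be loaded as follows. This is a minimal sketch: the module names are those given in this chapter, modprobe is assumed to resolve the lock_harness.o dependency, and the block device and mount point are hypothetical.

    # Load the GFS kernel module and the lock module that matches the
    # file system's lock protocol (lock_harness.o is assumed to be
    # pulled in as a dependency of gfs.o).
    modprobe gfs
    modprobe lock_gulm

    # Mount the file system (hypothetical device and mount point).
    mount -t gfs /dev/pool/pool0 /gfs1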
8.2. LOCK_GULM
OmniLock RLM and SLM are both implemented by the LOCK_GULM system.
LOCK_GULM is based on a central server daemon that manages lock and cluster state for all
GFS/LOCK_GULM file systems in the cluster. In the case of RLM, multiple servers can be run
redundantly on multiple nodes. If the master server fails, another "hot-standby" server takes over.
The LOCK_GULM server daemon is called lock_gulmd. The kernel module for GFS nodes using LOCK_GULM is called lock_gulm.o. The lock protocol (LockProto) as specified when creating a GFS/LOCK_GULM file system is called lock_gulm (lower case, with no .o extension).
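As an illustration of where the lock_gulm LockProto name is used, a file system might be created as follows. The cluster name, file-system name, journal count, and block device are hypothetical examples; adjust them for your configuration.

    # Create a GFS file system that uses the lock_gulm protocol.
    gfs_mkfs -p lock_gulm -t alpha:gfs1 -j 3 /dev/pool/pool0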
8.2.1. Selection of LOCK_GULM Servers
The nodes selected to run the lock_gulmd server are specified in the cluster.ccs configuration file (cluster.ccs:cluster/lock_gulm/servers). Refer to Section 6.6, Creating the cluster.ccs File.
For optimal performance, lock_gulmd servers should be run on dedicated nodes; however, they can also be run on nodes using GFS. All nodes, including those only running lock_gulmd, must be listed in the nodes.ccs configuration file (nodes.ccs:nodes).
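For illustration only, a cluster.ccs fragment naming a single lock_gulmd server node might look like the following. The cluster and node names are hypothetical; the exact file syntax is described in Section 6.6.

    cluster {
        name = "alpha"
        lock_gulm {
            servers = ["n01"]
        }
    }

The node named in servers, like every node that mounts a GFS file system, must also have an entry under nodes.ccs:nodes.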
8.2.2. Number of LOCK_GULM Servers
You can use just one lock_gulmd server; however, if it fails, the entire cluster that depends on it must be reset. For that reason, you can run multiple instances of the lock_gulmd server daemon on multiple nodes for redundancy. The redundant servers allow the cluster to continue running if the master lock_gulmd server fails.
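Continuing the hypothetical cluster.ccs fragment shown in Section 8.2.1, redundancy is obtained by listing more than one server node; one of the listed nodes acts as the master lock_gulmd server, and another takes over if it fails.

    lock_gulm {
        servers = ["n01", "n02", "n03"]
    }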