Chapter 1.
GFS Overview
GFS is a cluster file system that provides data sharing among Linux-based computers. GFS provides
a single, consistent view of the file system name space across all nodes in a cluster. It allows appli-
cations to install and run without much knowledge of the underlying storage infrastructure. GFS is
fully compliant with the IEEE POSIX interface, allowing applications to perform file operations as if
they were running on a local file system. GFS also provides features that are typically required in
enterprise environments, such as quotas, multiple journals, and multipath support.
GFS provides a versatile method of networking your storage according to the performance, scalability,
and economic needs of your storage environment.
This chapter provides basic background information to help you understand GFS. It contains the
following sections:
• Section 1.1, New and Changed Features
• Section 1.2, Performance, Scalability, and Economy
• Section 1.3, GFS Functions
• Section 1.4, GFS Software Subsystems
• Section 1.5, Before Configuring GFS
1.1. New and Changed Features
New for this release is GNBD (Global Network Block Device) multipath. GNBD multipath allows the
configuration of multiple GNBD server nodes (nodes that export GNBDs to GFS nodes) with redundant
paths between the GNBD server nodes and storage devices. The GNBD servers, in turn, present multiple
storage paths to GFS nodes via redundant GNBDs. With GNBD multipath, if a GNBD server node
becomes unavailable, another GNBD server node can provide GFS nodes with access to storage
devices.
With GNBD multipath, you must take additional factors into account; in particular, Linux page
caching must be disabled on GNBD servers in a GNBD multipath configuration. For information
about CCS files with GNBD multipath, refer to Section 6.4, GNBD Multipath Considerations. For
information about using GNBD with GNBD multipath, refer to Section 11.1, Considerations for Using
GNBD Multipath.
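As a sketch, a two-server GNBD multipath configuration might look like the following. The host
names, device path, and export names here are illustrative, and the command forms are assumptions
based on the GNBD tools of this release; note that the exports are left uncached, as multipath
requires.

```shell
# On GNBD server node A (attached to the shared storage device):
# export the device. Do NOT enable caching in a multipath
# configuration -- Linux page caching must stay disabled.
gnbd_export -d /dev/sda1 -e storage_a

# On GNBD server node B, exporting the same shared device under
# a second name:
gnbd_export -d /dev/sda1 -e storage_b

# On each GFS node: import the GNBDs from both servers. This
# yields two devices (/dev/gnbd/storage_a and /dev/gnbd/storage_b)
# that are redundant paths to the same underlying storage.
gnbd_import -i server_a
gnbd_import -i server_b
```

If server_a becomes unavailable, the GFS nodes can continue to reach the storage through the
GNBD imported from server_b.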
For upgrade instructions, refer to Appendix A, Upgrading GFS.
1.2. Performance, Scalability, and Economy
You can deploy GFS in a variety of configurations to suit your needs for performance, scalability, and
economy. For superior performance and scalability, you can deploy GFS in a cluster that is connected
directly to a SAN. For more economical needs, you can deploy GFS in a cluster that is connected to a
LAN with servers that use the GFS VersaPlex architecture. The VersaPlex architecture allows a GFS
cluster to connect to servers that present block-level storage via an Ethernet LAN. The VersaPlex
architecture is implemented with GNBD (Global Network Block Device), a method of presenting
block-level storage over an Ethernet LAN. GNBD is a software layer that can be run on network
nodes connected to direct-attached storage or storage in a SAN. GNBD exports a block interface from
those nodes to a GFS cluster.
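The basic GNBD flow described above can be sketched as follows. The host name, device path, and
export name are hypothetical, and the command forms are assumptions based on the GNBD tools that
accompany this release.

```shell
# On a network node attached to the storage (the GNBD server):
# export a block device over the Ethernet LAN under a chosen name.
gnbd_export -d /dev/sdb1 -e gfs_disk

# On a GFS cluster node: import the block devices exported by that
# server. Imported devices appear under /dev/gnbd/.
gnbd_import -i gnbd_server

# /dev/gnbd/gfs_disk can now be used like local block storage --
# for example, as a device on which a GFS file system is created.
```

In this way, GNBD lets a GFS cluster use LAN-attached servers as block-storage providers instead of
requiring every node to connect directly to a SAN.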