
Chapter 1. Red Hat Cluster Suite Overview

Persistence Network Mask

To limit persistence to a particular subnet, select the appropriate network mask from the drop-down menu.
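The Piranha Configuration Tool stores these persistence settings in the LVS configuration file, /etc/sysconfig/ha/lvs.cf. The fragment below is an illustrative sketch only; the service name, addresses, and timeout value are hypothetical, and it assumes the persistent and pmask keywords of the virtual server stanza:

```
# Illustrative lvs.cf fragment (hypothetical values).
# "persistent" holds the persistence timeout in seconds;
# "pmask" limits persistence to clients within the given subnet.
virtual http {
     address = 10.10.10.201 eth0:1
     port = 80
     persistent = 300
     pmask = 255.255.255.0
}
```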

1.10.4.2. REAL SERVER Subsection

Clicking on the REAL SERVER subsection link at the top of the panel displays the EDIT REAL SERVER subsection. It displays the status of the physical server hosts for a particular virtual service.

Figure 1-33. The REAL SERVER Subsection

Click the ADD button to add a new server. To delete an existing server, select the radio button beside it and click the DELETE button. Click the EDIT button to load the EDIT REAL SERVER panel, as seen in Figure 1-34.
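Each real server added through this subsection corresponds to a server stanza nested inside the relevant virtual server block of lvs.cf. A minimal sketch, with a hypothetical server name and address:

```
# Illustrative server stanza inside a virtual block in lvs.cf
# (hypothetical name and address).
# "weight" is this host's capacity relative to other hosts in the pool;
# "active = 1" marks the server as available for routing.
server web1 {
     address = 192.168.1.10
     active = 1
     weight = 1
}
```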
