The following sections describe how to configure, display, enable, disable, modify, relocate, and delete a service, as well as how to handle services that fail to start.
Before attempting to configure a service, prepare the cluster systems; for example, set up the disk storage or applications used by the service. Then, add information about the service properties and resources to the cluster database by using the cluadmin utility. This information is used as parameters to the scripts that start and stop the service.
To configure a service, follow these steps:
1. If applicable, create a script that will start and stop the application used in the service. See the Section called Creating Service Scripts for information.

2. Gather information about service resources and properties. See the Section called Gathering Service Information for information.

3. Set up the file systems or raw devices that the service will use. See the Section called Configuring Service Disk Storage for information.

4. Ensure that the application software can run on each cluster system and that the service script, if any, can start and stop the service application. See the Section called Verifying Application Software and Service Scripts for information.

5. Back up the /etc/cluster.conf file. See the Section called Backing Up and Restoring the Cluster Database in Chapter 8 for information.

6. Invoke the cluadmin utility and specify the service add command. The cluadmin utility prompts for information about the service resources and properties gathered in Step 2. If the service passes the configuration checks, it is started on the user-designated cluster system, unless the user wants to keep the service disabled. For example:

   cluadmin> service add
For more information about adding a cluster service, see the sections that follow.
Before creating a service, gather all available information about the service resources and properties. When adding a service to the cluster database, the cluadmin utility will prompt for this information.
In some cases, it is possible to specify multiple resources for a service (for example, multiple IP addresses and disk devices).
The service properties and resources that a user is able to specify are described in the following table.
Table 4-1. Service Property and Resource Information
| Service Property or Resource | Description |
| --- | --- |
| Service name | Each service must have a unique name. A service name can consist of 1 to 63 characters and must consist of a combination of letters (either uppercase or lowercase), integers, underscores, periods, and dashes. However, a service name must begin with a letter or an underscore. |
| Preferred member | The cluster system, if any, on which the service will run unless failover has occurred or the service is manually relocated. |
| Preferred member relocation policy | When enabled, this policy automatically relocates a service to its preferred member when that system joins the cluster. For example, if the failed preferred member for the service reboots and joins the cluster, the service automatically restarts on the preferred member. When disabled, the service remains running on the non-preferred member. |
| Script location | If applicable, the full path name for the script that will be used to start and stop the service. See the Section called Creating Service Scripts for more information. |
| Disk partition | Each shared disk partition used in the service. |
| Mount points, file system types, mount options, NFS export options, and Samba shares | For each shared file system used in the service, the mount point, file system type, and mount options, plus NFS export options and Samba share information, if applicable. |
| Service check interval | The frequency, in seconds, at which the system checks the health of the application associated with the service. For example, it verifies that the necessary NFS or Samba daemons are running. For other service types, monitoring consists of examining the return status of the "status" clause of the application service script. Specifying a value of 0 disables checking. |
| Disable service policy | If a user does not want a service started automatically after it is added to the cluster, the new service can be kept disabled until the user enables it. |
The cluster infrastructure starts and stops services for specified applications by running service-specific scripts. For both NFS and Samba services, the associated scripts are built into the cluster services infrastructure. Consequently, when running cluadmin to configure NFS and Samba services, do not enter a service script name. For other application types it is necessary to designate a service script. For example, when configuring a database application in cluadmin, specify the fully qualified pathname of the corresponding database start script.
The format of the service scripts conforms to the conventions followed by System V init scripts: each script must have a start, stop, and status clause, and each clause should return an exit status of 0 on success. The cluster infrastructure stops a cluster service that fails to start successfully and places it in the disabled state.
In addition to performing the start and stop functions, service scripts are used to monitor the application service. Monitoring is performed by calling the status clause of the service script. To enable service monitoring, specify a nonzero value at the Status check interval: prompt in cluadmin. If the status clause returns a nonzero exit status, the cluster infrastructure first attempts to restart the application on the member it was previously running on. Status functions do not have to be fully implemented in service scripts; if the script performs no real monitoring, a stub status clause that returns success should be present.
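The convention described above can be sketched as a minimal service script. This is an illustrative template only: the application name "exampledb", its start and stop commands, and the pid file path are hypothetical placeholders, not part of the cluster software.

```shell
#!/bin/sh
# Sketch of a System V-style service script for a cluster service.
# "exampledb" and the commented commands are hypothetical placeholders.

service_action() {
    case "$1" in
      start)
        echo "Starting exampledb"
        # /usr/local/bin/exampledb --daemon     # hypothetical start command
        return 0
        ;;
      stop)
        echo "Stopping exampledb"
        # kill "$(cat /var/run/exampledb.pid)"  # hypothetical stop command
        return 0
        ;;
      status)
        # Stub status clause: reports success without real monitoring.
        # Replace with a genuine health check if monitoring is enabled.
        return 0
        ;;
      *)
        echo "Usage: $0 {start|stop|status}"
        return 1
        ;;
    esac
}

# Default to "status" only so this sketch runs standalone; a real init
# script would pass "$1" through directly.
service_action "${1:-status}"
```

Each clause returns 0 on success, so the cluster infrastructure treats a nonzero exit from any clause as a failure.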
The operations performed within the status clause of an application can be tailored to best meet the application's needs as well as site-specific parameters. For example, a simple status check for a database would consist of verifying that the database process is still running. A more comprehensive check would consist of a database table query.
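As a sketch of the simple variant, a status clause might only verify that the process recorded in a pid file is still alive. The pid file path shown is an assumption for illustration.

```shell
#!/bin/sh
# Simple status check sketch: succeed only if the process whose PID is
# recorded in the given pid file still exists. Paths are hypothetical.

status_check() {
    pidfile="$1"
    [ -f "$pidfile" ] || return 1               # no pid file: not running
    kill -0 "$(cat "$pidfile")" 2>/dev/null     # signal 0 only tests existence
}

# Example use from a status clause:
#   status_check /var/run/exampledb.pid
```

A more comprehensive check, such as a database table query, would replace the `kill -0` test with an application-level probe.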
The /usr/share/cluster/doc/services/examples directory contains a template that can be used to create service scripts, in addition to examples of scripts. See the Section called Setting Up an Oracle Service in Chapter 5, the Section called Setting Up a MySQL Service in Chapter 5, the Section called Setting Up an Apache Service in Chapter 7, and the Section called Setting Up a DB2 Service in Chapter 5 for sample scripts.
Prior to creating a service, set up the shared file systems and raw devices that the service will use. See the Section called Configuring Shared Disk Storage in Chapter 2 for more information.
If employing raw devices in a cluster service, it is possible to use the /etc/sysconfig/rawdevices file to bind the devices at boot time. Edit the file and specify the raw character devices and block devices that are to be bound each time the system boots. See the Section called Editing the rawdevices File in Chapter 3 for more information.
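An illustrative /etc/sysconfig/rawdevices fragment follows; the device names are hypothetical and should be replaced with the raw and block devices used at your site.

```
# raw character device    block device
/dev/raw/raw1             /dev/sdb1
/dev/raw/raw2             /dev/sdb2
```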
Note that software RAID and host-based RAID are not supported for shared disk storage. Only certified SCSI adapter-based RAID cards can be used for shared disk storage.
Administrators should adhere to the following service disk storage recommendations:
For optimal performance, use a 4 KB block size when creating file systems. Note that some of the mkfs file system build utilities default to a 1 KB block size, which can cause long fsck times.
To facilitate quicker failover times, it is recommended that the ext3 file system be used. Refer to the Section called Creating File Systems in Chapter 2 for more information.
For large file systems, use the mount command with the nocheck option to bypass code that checks all the block groups on the partition. Specifying the nocheck option can significantly decrease the time required to mount a large file system.
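The recommendations above might translate into commands such as the following; the device name and mount point are hypothetical placeholders.

```
mkfs -t ext3 -b 4096 /dev/sdb1          # 4 KB block size at file system creation
mount -o nocheck /dev/sdb1 /mnt/data    # bypass block-group checks when mounting
```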
Prior to setting up a service, install on each cluster system any application that will be used in the service. After installing the application, verify that it runs and can access shared disk storage. To prevent data corruption, do not run the application simultaneously on both systems.
If using a script to start and stop the service application, install and test the script on both cluster systems, and verify that it can be used to start and stop the application. See the Section called Creating Service Scripts for information.