Preparation

The following describes how to prepare for clustering.

• See Hardware, Operating Systems and Ports Used for hardware recommendations.

• Obtain cluster licenses.

• As in any distributed system, you must ensure that all servers’ system times are synchronized. Application server and mediation server clusters have significant problems operating until their time is synchronized with NTP.

An example of such a problem: job status messages do not appear when backing up devices, and the start time appears later than the finish time.

NTP is unnecessary with a standalone server, but you must use it whenever you distribute computing in clusters, or even when you distribute only mediation.

Here is a sample NTP client configuration file:

# @(#)ntp.client 1.2 96/11/06 SMI
#
# /etc/inet/ntp.client
#
# An example file that could be copied over to /etc/inet/ntp.conf; it
# provides a configuration for a host that passively waits for a server
# to provide NTP packets on the ntp multicast net.
#
#multicastclient 224.0.1.1

driftfile /var/ntp/driftfile

server apollo
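
Once NTP is in place, it is worth confirming that every cluster host has actually synchronized before you start the servers. The commands below are standard NTP client tools rather than part of this product, and the server name apollo simply echoes the sample configuration above:

# Show the local NTP daemon's peers; the peer marked * is the current
# synchronization source, and offsets should be small (a few milliseconds).
ntpq -p

# Query the NTP server directly without adjusting the local clock.
ntpdate -q apollo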

• The partition names for servers in a cluster must be the same, and the servers must use the same multicast address for intra-cluster communication. You can set these for application servers in the cluster installation screen that appears whenever you do a Custom installation.

A similar screen asks for the partition name for mediation server installations (one partition for application server, another for mediation server), and lets you select whether to have mediation servers send redundant messages to the cluster or process messages independently. Choose the former for a high availability mediation cluster.

The application and mediation clusters are separate entities, and therefore need different partition names. The cluster name can be an arbitrary string, but must be unique for each.

If you are adding mediation servers after you have created the application server, you can find the partition name in the status bar of the Java client, and in the application server shell, where it appears with each report on application server status.

To set the partition name and intra-cluster multicast address at the command line, use the -p and -m options, respectively. For example:

startappserver -p appcluster -m 225.0.0.1

or

startmedagent -p medcluster -m 225.0.0.2

Notice that the mediation server’s cluster name and multicast address are different from the application server’s. Notice also that the address should avoid the 224.0.0.1 to 225.0.0.0 range; use 225.0.0.1 and above. See Disabling Multicast for an explanation of how to set up that exception.

Installation now automatically sets the intra-cluster multicast address, so the -m parameter is strictly optional (and must be consistent between hosts, if used).
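
For instance, with the multicast address already recorded by the installer, a minimal sketch of the start commands (using the same example partition names as above) can omit -m entirely:

startappserver -p appcluster

startmedagent -p medcluster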

You can also set the partition name during installation or by setting the oware.client.partition.name property in the installed.properties file. For example:

oware.client.partition.name=appcluster

One further note: you must add -c <Config Server host name> to the command line above if you do not modify or override the oware.config.server property, as described in the instructions in Step-By-Step Application Server Clustering.
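
For example, assuming the Config Server runs on a host named appserver1 (a hypothetical name used only for illustration), the command would look like this:

startappserver -p appcluster -m 225.0.0.1 -c appserver1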

If the Config Server goes down, the cluster can continue, but you cannot add more hosts to the cluster (although recovering hosts can re-add themselves).

• If you are changing existing installations to clusters, you must revise the database settings (if you are doing this with a fresh installation, the installer does it for you). Set the com.dorado.jdbc.database_name.mysql=//localhost:3306/owbusdb property to the same host (not localhost) for all members of a cluster in the owareapps/installprops/lib/installed.properties file.

With Oracle, the property to change is com.dorado.jdbc.database_name.oracle=@mydbserver1:1521:mydb1sid. See Oracle Database Management below for more information about Oracle’s cluster-related properties.
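
As a sketch, assuming a shared database host named dbhost1 (a hypothetical name), every cluster member’s installed.properties would point at that same host rather than localhost:

# MySQL: identical database host for every member of the cluster
com.dorado.jdbc.database_name.mysql=//dbhost1:3306/owbusdb

# Oracle: identical host and SID for every member of the cluster
com.dorado.jdbc.database_name.oracle=@dbhost1:1521:mydb1sid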

Command lines always override property file settings.