Complex Installations

The following describes an installation to a cluster of Solaris Application Servers, with distributed mediation servers, clients, and an Oracle database. Linux cluster installation is similar to Solaris (refer to the operating system documentation for details). In a production environment, such installations must account for network security settings and firewalls.

Here are a few things to check before your installation:

• Ensure that any network time (NTP) server processes are running

• From a command line, ping all hosts from each other to confirm connectivity exists. Ping with the fully qualified domain name and with the hostname alone (see the example after this list).

• Ensure that the installing user has admin privileges on the database. See Oracle Database Management for more about setting up Oracle.

• See Ports Used for the list of ports that must be open on the firewall for components to communicate with each other. Open the appropriate ports (bidirectionally).
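For example, run the following from each host (the hostnames here are hypothetical placeholders for your own):

ping appserver1.mydomain.com
ping appserver1
ping medserver1.mydomain.com
ping medserver1

Both forms should succeed from every host; if only one form works, fix DNS or /etc/hosts entries before continuing.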

When firewalls stand between components, it is often easiest to disable them completely while you initially install and test the distributed installation.

For Solaris, best practice includes adding this line to the application user’s .profile:

. /etc/.dsienv

This means the user sources the environment on login. On Windows, running the oware command on clients creates a bash emulation with the same environment.
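One way to add this line, assuming the standard ~/.profile location for the application user:

echo ". /etc/.dsienv" >> ~/.profile

The environment then takes effect at the next login; to apply it to the current shell, source /etc/.dsienv manually.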

Follow these steps for your installation:

1. Make the appropriate Custom installations on all affected machines (Application Servers, mediation servers, clients). You must know the following items, which you configure during installation:

• Cluster name--For example: my_redcell_cluster is our Application Server cluster.

Some steps below are unnecessary if you plan simply to distribute processing rather than cluster servers.

• Database information--@[Server_name]:[port]:[database name]. For example: @my_server:1521:MYDB.

• If you are clustering servers, you must select which will be the config server. For example: my_server.

You may want these servers to autostart in a production system. If so, select that option when installing.

2. Install to Oracle as the installing user, first with loaddb, then with oraclepostinstall. Run loaddb -u [dba user] -w [dba password] -s -g. See also Oracle Database Management for more Oracle information.
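A sample invocation, with placeholder DBA credentials (substitute your own):

loaddb -u system -w manager -s -g

Then run oraclepostinstall as described above.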

If you have a stand-alone MySQL database, you must also run oraclepostinstall for that database.

If you are planning to change the database name, you must change it in owareapps/installprops/lib/installed.properties on the Application Server.

For loaddb to work, you must have the $ORACLE_HOME/network/admin/tnsnames.ora file configured on your Application Server. Here is an example configuration:

 

-------------------------------------------------------------------
MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = [Server_name])(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYDB)
    )
  )
-------------------------------------------------------------------

 

To test database connectivity from the Application Server, try:

pingdb -u <username> -p <password>
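For example, using the database user from this guide’s sample installed.properties (shown in the next step):

pingdb -u redcell -p mypwd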

See Oracle RAC Property for more information about installing with Oracle’s failover.

3. Add oware.config.server=[fully qualified primary appserver name] to owareapps/installprops/lib/installed.properties on the Application Servers. Here are the relevant portions of these installed.properties files:

Appserver:

 

#************************************************************
# The following properties override those found in          *
# oware/lib/*.properties in order to establish valid        *
# properties for this installation.                         *
#************************************************************

oware.config.server=my_server
oware.installed.package.name=RedCell
...
oware.client.partition.name=my_redcell_cluster
oware.local.ip.address=193.35.184.175
oware.mediation.subnet.mask=255.255.255.0
com.dorado.bom_dbms.name=oracle
com.dorado.jdbc.user=redcell
com.dorado.jdbc.password=mypwd
com.dorado.jdbc.database_name.oracle=@my_server:1521:MYDB
...

If you plan to change the database name, you must change the relevant portion of your Application Servers’ installed.properties files. Similarly, if you use a database tool to change the default user’s password, you must change that password in the com.dorado.jdbc.password property.

The config server distributes tasks to members of a cluster. If you are not clustering, then this step is superfluous.

4. If you are clustering mediation servers, you must add the mediation server cluster’s config server in installprops/medserver/lib/installed.properties on all such servers. This example installation does not cluster mediation servers. We do, however, want to bypass multicast communication with Application Servers, so this file is configured with the oware.application.servers property. Multicast may be restricted by a firewall.

Medserver:

/installprops/medserver/lib/installed.properties

 

#************************************************************
# The following properties override those found in          *
# oware/lib/*.properties and                                 *
# oware/medserver/lib/*.properties in order to establish    *
# valid mediation properties for this installation.         *
#************************************************************

## This property defines whether mediation listeners should make
## use of high availability or not.
## Possible values: true  - Listeners come up in high availability
##                          mode (Forwarding/Standby)
##                  false - Listeners come up independently.
com.dorado.mediation.listener.use.high.availability=false
##
## Multicast message communication between agents for working in a peer environment.
com.dorado.mediation.listener.multicast.intercomm.address=226.0.0.26
oware.application.servers=193.35.184.175,193.35.184.170

This modification is typical for secure environments where multicast is restricted by a firewall.

If you use the oware.application.servers property to bypass multicast, you must list all available servers (comma-separated) wherever you use it.

An HA “cluster” needs only two mediation servers. You can have more than two, but that just means more than one standby server exists. If you set the HA property to false, all clustered mediation servers are active at the same time, and any number of mediation servers can be in the same cluster.

5. If you are clustering servers, uncomment the SonicMQ lines of oware/lib/owjms.properties. (You must eventually do this on both Java clients and servers.)

####
## SonicMQ properties
##
## To enable SonicMQ, remove the # from the beginning of these lines.

#jms.provider=SONICMQ
#jms.qf=QueueConnectionFactory
#jms.tf=TopicConnectionFactory
#com.dorado.eventchannel.VendorInitFactoryClass.sonicmq=com.dorado.core.jms.OWSonicMQInitFactory
#com.dorado.jms_vendor.port.sonicmq=2506
#com.dorado.eventchannel.protocol.sonicmq=tcp://

This provides the Java messaging service for the cluster. If you are simply distributing processing (Application Server separate from mediation server, for example), this is not necessary.

6. Disable the mediation server on Application Server machines by not configuring it during installation. Alternatively, you can put this property in owareapps/installprops/lib/installed.properties:

oware.appserver.mediation.setup=false

7. If you installed autostarting, configure it in owareapps/installprops/lib/installed.properties. See Startup Properties for a list of potentially configurable properties.

8. During testing, you can start application or mediation servers in two different ways:

• To test autostart: run /etc/rc2.d/S76oware start as root

• For manual startup (common during testing), run the following as the application’s user (see the sample invocation after these commands):

startappserver -c <config server> -p <cluster> -m <multicast address>

and

startmedserver -a <appserver cluster>
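For example, using this guide’s sample names (the multicast address below is a hypothetical placeholder; use the address configured for your cluster):

startappserver -c my_server -p my_redcell_cluster -m 226.0.0.25

and

startmedserver -a my_redcell_cluster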

For headless servers (ones without video cards), you must follow the instructions in Headless Application Servers.

See also startappserver / stopappserver Command Lines.