This section describes the constraints within which a cluster configuration must work.
• All servers in a cluster must be on the same local area network (LAN), must be reachable via IP multicast, and must have unique names. Clustering is not designed for servers in different time zones (you can, however, have mediation agents send data from distant locations). For an exception to the multicast requirement, and for managing servers and clients outside firewalls, see Disabling Multicast.
To ensure that application server nodes do not miss the server-to-server heartbeats whose loss can erroneously initiate failover, you must connect clustered application servers via a LAN with a maximum latency of 100 ms. Within that limit, you can put clustered server nodes several miles/kilometers apart (using fiber Ethernet, for example), as long as the connection does not exceed the maximum latency. WAN connectivity is not recommended.
Although high-speed interconnects may be able to increase the distance between application server nodes to over 5 km, the Redcell High-Availability solution is not designed for disaster recovery. Dorado Software recommends a separate Redcell deployment at the disaster recovery location, used as a warm or cold standby system. Copy the database of the primary system to the disaster recovery standby system at regularly scheduled intervals.
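Before deploying, you can spot-check the 100 ms ceiling between prospective cluster nodes. The sketch below (assuming Python is available on an administrative workstation; the peer host name and port are placeholders, and TCP connect round-trip time is used as a rough stand-in for heartbeat latency, not a product-specific measurement):

```python
import socket
import time

def tcp_rtt_ms(host, port, samples=5):
    """Estimate average TCP connect round-trip time to host:port, in ms."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        # Each connection is opened, timed, and immediately closed.
        with socket.create_connection((host, port), timeout=2):
            pass
        total += (time.monotonic() - start) * 1000.0
    return total / samples

MAX_LATENCY_MS = 100  # heartbeat latency ceiling from the constraint above

# "peer-node" and 8080 are placeholders for a real cluster member and a
# TCP port it listens on:
# rtt = tcp_rtt_ms("peer-node", 8080)
# print("OK" if rtt <= MAX_LATENCY_MS else "latency too high: %.1f ms" % rtt)
```

Run the check from each node toward each other node; a single slow link is enough to violate the constraint.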
• All cluster members must run the same version of the application and listen on the same port.
• You must identically configure all servers running Enterprise JavaBeans (EJBs) with Java Database Connectivity (JDBC) connection pools.
• For clusters using database connection pools, each cluster member must have an identical connection pool.
• The Access Control Lists (ACLs) and servlets must be identical for every machine serving servlets.
• All cluster members must have identical service configurations. You cannot, for example, turn on mediation services in some cluster members and not others.
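Several of the constraints above reduce to "configuration must be identical on every member." One way to verify this, sketched below in Python (the file names are placeholders, not the product's actual configuration files), is to fingerprint the relevant files on each node and compare the digests:

```python
import hashlib
from pathlib import Path

def config_fingerprint(paths):
    """Return a SHA-256 digest over the contents of the given config files.

    Run this on each cluster member with the same file list; matching
    digests mean matching configuration.
    """
    digest = hashlib.sha256()
    for p in sorted(paths):  # sort so the file order cannot skew the digest
        digest.update(Path(p).read_bytes())
    return digest.hexdigest()

# Placeholder names; substitute your installation's JDBC connection pool,
# ACL, and service configuration files:
# print(config_fingerprint(["db-pool.properties", "services.properties"]))
```

Comparing digests rather than copying files around also catches local edits made on a single node after deployment.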
• Only a single application server cluster per system is supported, but you can have several mediation server clusters.
• Database servers can always be separate, no matter how you configure the other machines. You can make a database machine part of an application server cluster/partition, but this is not recommended.
For non-embedded database installations (Oracle), best practice is to keep the database servers separate from the other machines.
• Application servers process more than just mediation, but any application server can also run mediation services. Stand-alone mediation servers run only mediation services. You can turn mediation on or off on any host running an application server.
If you cluster application servers, one or more distributed mediation servers typically handle mediation, and the clustered application servers have mediation turned off.
As stated above, you can also run both the application server and mediation services on a single machine. A cluster of such combined application/mediation machines is also possible: minimally, two servers, each running an application server with mediation services.
One caveat: Mediation processes have an impact on application server performance. This is why best practice for larger deployments is to distribute mediation services to dedicated mediation servers. Once you distribute mediation, you can ensure better performance by disabling mediation on application servers in the application server cluster.
• Although you can run client applications on clustered application server machines, this is not typical or recommended. If you cluster application servers and clients run on machines outside that cluster, do not identify the client machines as part of the cluster.
• SNMP Mediation agents can back each other up as primary/secondary for any “subnet” (where a subnet is any range of IP addresses). A mediation agent that is primary for one subnet can be secondary for a different one. Such subnets must not overlap.
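If you record each agent's subnet assignments in CIDR notation, the no-overlap rule can be checked mechanically before the configuration goes live. A minimal Python sketch using the standard ipaddress module (the subnet values below are hypothetical):

```python
import ipaddress

def assert_no_overlap(subnets):
    """Raise ValueError if any two subnet assignments overlap."""
    nets = [ipaddress.ip_network(s) for s in subnets]
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                raise ValueError(f"overlapping subnets: {a} and {b}")

# Hypothetical primary assignments for two mediation agents:
assert_no_overlap(["10.1.0.0/16", "10.2.0.0/16"])    # disjoint, passes
# assert_no_overlap(["10.1.0.0/16", "10.1.4.0/24"])  # would raise ValueError
```

Running the check over the combined list of every agent's primary and secondary assignments catches overlaps that span agents, not just overlaps within one agent's configuration.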
• Clients are not clustered by the application. If you need to cluster clients (as in a Web deployment), your application must manage the clustering itself.
• For large clustered installations, best practice is to identify one or more workstations for troubleshooting and for managing alarms from the application itself.