Installing on a cluster in WebSphere®
If you are installing Marketing Operations on a cluster in WebSphere®, use the following changes and additions when completing the tasks in chapters 2 through 7.
Preparing the data sources
The data source chapter instructs you to create a database for Marketing Operations and then configure a JDBC data source for it in your application server. When you complete these tasks for a cluster on WebSphere®, note the following additional instructions:
* The Marketing Operations database must be on a machine that is accessible to all machines in the cluster, but it does not need to be on a machine in the cluster.
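All cluster members reference this database through the same JDBC data source. As a hypothetical illustration (the host, port, and database names here are examples only, not values from this guide), a data source for a DB2® database on a dedicated database machine might use a JDBC URL such as:
jdbc:db2://dbHost.example.com:50000/plan
Because the URL names a host that every cluster member can reach, the same data source definition works for all members.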
Installing the products
When you follow the instructions for running the installer, be sure that you install Marketing Platform and Marketing Operations once, on a machine that is accessible to all machines in the Marketing Operations cluster.
You do not need to install the software on each cluster member. Instead, you install it once, create the EAR, and then deploy the EAR file on each cluster member.
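As a sketch only (the cluster name, application name, and EAR path are hypothetical, and the deployment chapter remains the authoritative procedure), installing the EAR on a cluster from the wsadmin tool might look like the following Jython fragment:
# Install the Marketing Operations EAR on every member of the cluster.
# 'MktOpsCluster', 'plan', and the EAR path are hypothetical values.
AdminApp.install('C:/MktOpsCluster/IBM_EMM/plan.ear', ['-cluster', 'MktOpsCluster', '-appname', 'plan'])
AdminConfig.save()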
Additional pre-deployment steps
Before you deploy Marketing Operations, complete the following tasks in addition to the ones specified in the pre-deployment configuration chapter:
* Share the top-level directory in which Marketing Operations is installed. For example, if Marketing Operations is installed in C:\MktOpsCluster\IBM_EMM\MarketingOperations, you would share the entire MktOpsCluster directory.
* Create and share a folder on the administration server to contain the upload files for Marketing Operations. This folder is known as the Shared_UMO_Artifacts folder. All cluster members must have full control (read, write, change, and delete) of this folder. Placing this folder below the IBM® home directory in the local file system hierarchy is optional.
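For example, if the MktOpsCluster directory is shared from a machine named umoMachine (the same hypothetical machine name used in the examples that follow), cluster members would reference the installation as \\umoMachine\MktOpsCluster\IBM_EMM\MarketingOperations, and the upload folder might be shared as \\umoMachine\Shared_UMO_Artifacts.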
Additional deployment steps
In addition to the instructions in the deployment chapter, note the following additions:
1. Map modules to servers
When you set the options in the Select Installation Options wizard for WebSphere®, select the cluster and the web server when mapping modules to servers.
2. Additional instructions for the Generic JVM property
Remember to configure the Generic JVM property on each machine in the cluster.
The path you specify in the plan.home and other properties must point to the shared installation directory.
You set the following additional parameters for clusters:
* -Dplan.log.config=\\umoMachine\SharedUnicaHome\MarketingOperations\conf\plan_log4j_client.xml
* -Dplan.local.log.dir=local_log_dir (where local_log_dir is a writable folder on the physical machine where Marketing Operations creates logs)
* -Dplan.suppressNotifications=true
Set this property on every node except one: the node that you want to send notifications.
* To use a different plan_ehcache.xml file in place of the default file defined in the conf directory for a node, set the -plan_ehcache parameter for that node to specify the location of the file.
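Taken together, the Generic JVM arguments for a typical cluster node might resemble the following (the machine name umoMachine and the local log directory are illustrative; plan.home must point to the shared installation directory):
-Dplan.home=\\umoMachine\SharedUnicaHome\MarketingOperations
-Dplan.log.config=\\umoMachine\SharedUnicaHome\MarketingOperations\conf\plan_log4j_client.xml
-Dplan.local.log.dir=C:\MktOpsLogs
-Dplan.suppressNotifications=true
On the single node that is allowed to send notifications, omit -Dplan.suppressNotifications=true.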
Configure session management cookie
You must define the name of the session management cookie used by servers in the cluster. To configure the session management cookie:
1. In the WebSphere® console, access the properties for a server in the cluster. Navigate to the web container settings and open the session management configuration.
2. Enable cookies and specify UMOSESSIONID as the cookie name.
3. Repeat these steps for every server in the cluster.
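If you prefer to script this change, a minimal wsadmin (Jython) sketch along these lines sets the cookie name for one server; the node and server names are hypothetical, and you would repeat it for each cluster member:
# Set the session cookie name for one server ('node1' and 'server1' are hypothetical).
sm = AdminConfig.list('SessionManager', AdminConfig.getid('/Node:node1/Server:server1/'))
AdminConfig.modify(sm, [['enableCookies', 'true']])
cookie = AdminConfig.showAttribute(sm, 'defaultCookieSettings')
AdminConfig.modify(cookie, [['name', 'UMOSESSIONID']])
AdminConfig.save()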
Additional post-deployment steps
If you are using a plug-in for load balancing, you must perform the following configuration steps.
* For IBM® Marketing Operations to work effectively in a clustered environment, users must remain on a single node for their entire session. This option for session management and load balancing is known as session affinity. For more information about configuring your installation to use session affinity, see the documentation for your application server.
* Log in to Marketing Operations, select Settings > Configuration, and configure the URL parameters so that all references to the Marketing Operations server use the proxy host and port.
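For example, if the load-balancing proxy accepts requests at http://lb.example.com (a hypothetical host), each URL value would begin with http://lb.example.com rather than the host and port of an individual cluster node, so that links generated by Marketing Operations are routed back through the load balancer.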