Configuring the core group bridge between core groups that are in different cells
Use this task to configure communication between core groups that are in different cells.
Before you begin
- You have two or more core groups that are in different cells. A core group is a statically defined component of the high availability manager.
- Any cell that uses core group bridges to connect to core groups in other cells must have a name that is unique when compared to the names of the other cells.
About this task
A core group bridge should be used in situations where the availability status of servers in different cells needs to be shared across all of those cells. For example, you might have a situation where a WebSphere proxy server needs the ability to route requests to servers in other cells.
You can use core group bridge custom properties to set up advanced configurations for a core group bridge.

- Whenever a change is made to the core group bridge configuration, including the addition of a new bridge or the removal of an existing bridge, you must fully shut down, and then restart, all core group bridges in the affected access point groups.
- There must be at least one running core group bridge in each core group. If you configure two bridges in each core group, a single server failure does not disrupt the bridge functionality. Also, configuring two bridges enables you to periodically cycle out one of the bridges. If all the core group bridges in a core group are shut down, the core group state from all foreign core groups is lost.

- It is recommended that core group bridges be configured in their own dedicated server processes, and that these processes have their monitoring policy set for automatic restart.
- For each of your core groups, set the IBM_CS_WIRE_FORMAT_VERSION core group custom property to the highest value that is supported in your environment, as shown in the sketch after this list.
- To conserve resources, do not create more than two core group bridge interfaces when you define a core group access point. You can use one interface for workload purposes and another interface for high availability. Ensure that these interfaces are on different nodes for high availability purposes. For more information, see the frequently asked question information on core group bridges.
- You should typically specify only two bridge interfaces per core group. At least two bridge interfaces are necessary for high availability; more than two add unnecessary memory and CPU overhead.
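If you prefer scripting, the IBM_CS_WIRE_FORMAT_VERSION custom property can also be set with the wsadmin tool instead of the administrative console. The following Jython fragment is a minimal sketch, assuming a core group named DefaultCoreGroup and using '6.1.0' only as a placeholder value; substitute the highest version that your environment actually supports and verify the property handling for your release.

```
# wsadmin -lang jython sketch: set the IBM_CS_WIRE_FORMAT_VERSION custom
# property on a core group. The core group name (DefaultCoreGroup) and the
# value ('6.1.0') are placeholders for this example.
coreGroup = AdminConfig.getid('/CoreGroup:DefaultCoreGroup/')

# Add the custom property; if it already exists, modify the existing entry
# instead of creating a duplicate.
AdminConfig.create('Property', coreGroup,
                   [['name', 'IBM_CS_WIRE_FORMAT_VERSION'],
                    ['value', '6.1.0']])

AdminConfig.save()
```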

To match the access point groups in the two cells, use one of the following approaches:
- Give the access point groups in the two cells the same name.
- If the names of the two access point groups are different, set the member communication key of either access point group to the name of the other access point group.
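Before choosing between these two approaches, it can help to list the access point group names that each cell already defines. The following wsadmin (Jython) check is a sketch that assumes the bridge configuration exposes an AccessPointGroup object type; confirm the type name in your release (for example with AdminConfig.types()).

```
# wsadmin -lang jython sketch: print the access point group names defined
# in this cell so they can be compared with the names in the other cell.
for apg in AdminConfig.list('AccessPointGroup').splitlines():
    if apg:
        print AdminConfig.showAttribute(apg, 'name')
```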
To configure a core group bridge between core groups in different cells, complete the following procedure for each of the cells in your configuration.
Procedure
Results
You configured the core group bridge between core groups that are in different cells.
The following figure illustrates the resulting core group bridge between the two core groups that are in two different cells. Each cell has a defined access point group that contains one core group access point for the core group that is in the cell and a peer access point for the other cell.

Example
- The two cells are referred to as the primary cell and the remote cell.
- wasdmgr02/dmgr/DCS is the name of the deployment manager on the primary cell, and wasdmgr03/dmgr/DCS is the name of the deployment manager on the remote cell.
- wasna01/nodeagent/DCS is the name of a node on both the primary cell and the remote cell.
- CGAP_1/DefaultCoreGroup is the name of the core group access point and core group on both the primary cell and the remote cell.
- Using the administrative console for the primary cell, click
- Select CGAP_1/DefaultCoreGroup, and then click Show Detail.
- Select Bridge interfaces, and then click New.
- In the Bridge interfaces field, select the deployment manager, wasdmgr02/dmgr/DCS, from the list of available bridge interfaces, and then click OK.
- Click New to create a second bridge interface.
- In the Bridge interfaces field, select a node agent, such as wasna01/nodeagent/DCS, and then click OK to save your changes.
- Go to the administrative console for the remote cell, and click .
- Select CGAP_1/DefaultCoreGroup, and then click Show Detail.
- Select Bridge interfaces, and then click New.
- In the Bridge interfaces field, select the deployment manager, wasdmgr03/dmgr/DCS, from the list of available bridge interfaces, and then click OK.
- Click New to create a second bridge interface.
- In the Bridge interfaces field, select the node agent, wasna01/nodeagent/DCS, from the list of available bridge interfaces, and then click OK to save your changes.
- Save your changes.
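After saving, you can confirm the new bridge interfaces from a script instead of the console. The following wsadmin (Jython) sketch assumes the CoreGroupAccessPoint and BridgeInterface object types used by the core group bridge configuration; verify both type names (for example with AdminConfig.types()) before relying on the output.

```
# wsadmin -lang jython sketch: list the bridge interfaces defined under each
# core group access point in the current cell.
for cgap in AdminConfig.list('CoreGroupAccessPoint').splitlines():
    if not cgap:
        continue
    print 'Core group access point: ' + AdminConfig.showAttribute(cgap, 'name')
    for bridge in AdminConfig.list('BridgeInterface', cgap).splitlines():
        if bridge:
            print '  bridge interface: ' + bridge
```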
- Gather the following information for the remote cell:
- The DCS port for the deployment manager. Click , and write down the port number for DCS_UNICAST_ADDRESS. In this example, the DCS port for the deployment manager is 9353.
- The DCS port for the wasna01 node agent. Click , and write down the port number for DCS_UNICAST_ADDRESS. In this example, the DCS port for the node agent is 9454.
- The name of the core group in the cell to which the Enterprise JavaBeans (EJB) cluster belongs. Click , verify that your servers are members of the DefaultCoreGroup core group, and then write down the core group name. In this example, the core group name is DefaultCoreGroup.
- The name of the cell. Click , and then write down the name that displays in the Name field. In this example, the name of the cell is wascell03.
- The name of the core group access point. Click , expand the DefaultAccessPointGroup field, and write down the name of the core group access point that displays when you expand Core Group DefaultCoreGroup. In this example, the name of the core group access point is CGAP_1.
- Go back to the administrative console for the primary cell and gather the same information about the primary cell. In this example:
- The DCS port for the deployment manager on the primary cell is 9352.
- The DCS port for the wasna01 node agent on the primary cell is 9353.
- The name of the core group in the cell to which the EJB cluster belongs is DefaultCoreGroup.
- The name of the cell is wascell02.
- The name of the core group access point is CGAP_1.
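Instead of collecting these values through the console, you can read them from the configuration with wsadmin. The following Jython sketch assumes the standard serverindex model, in which each server entry owns NamedEndPoint objects and the DCS port is the one whose endPointName is DCS_UNICAST_ADDRESS; run it once against each cell.

```
# wsadmin -lang jython sketch: print the cell name and the DCS_UNICAST_ADDRESS
# host and port for every server entry (deployment manager, node agents, servers).
print 'Cell name: ' + AdminConfig.showAttribute(AdminConfig.list('Cell'), 'name')

for nep in AdminConfig.list('NamedEndPoint').splitlines():
    if nep and AdminConfig.showAttribute(nep, 'endPointName') == 'DCS_UNICAST_ADDRESS':
        ep = AdminConfig.showAttribute(nep, 'endPoint')
        print nep + ' -> ' + AdminConfig.showAttribute(ep, 'host') + ':' + \
              AdminConfig.showAttribute(ep, 'port')
```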
- Create a new peer access point that points to the remote cell.
In the primary cell administrative console, click
- Click New to start the Create new peer access point wizard.
- Specify the name of the new peer access point, RemoteCellGroup, in the Name field, wascell03 in the Remote cell name field, DefaultCoreGroup in the Remote cell core group name field, and CGAP_1 in the Remote cell core group access point name field.
- Click Next, and then select either Use peer ports or Use a proxy peer access point. For this example, select Use peer ports, and specify washost02 in the Host field and 9353 in the Port field. These values are the host name and DCS port number for the deployment manager on the remote cell.
- Click Next, confirm that the information that you specified for the new peer access point is correct, and then click Finish.
- Add a second peer port for the node agent on the remote cell.
- Select the peer access point that you just created, RemoteCellGroup/wascell03/DefaultCoreGroup/CGAP_1, and then click Show Detail.
- In the Peer addressability section, select Peer ports, and then click Peer ports > New.
- Specify washost04 in the Host field, and 9454 in the Port field. These values are the host name and DCS port number for the node agent on the remote cell.
- Click OK and then click Save to save your changes to the master configuration.
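The wizard steps above can also be expressed as a wsadmin script. The following Jython sketch uses the example values from this task (RemoteCellGroup, wascell03, DefaultCoreGroup, CGAP_1, washost02:9353, and washost04:9454), but the PeerAccessPoint and EndPoint object types, their attribute names, and the parent object they are created under are assumptions about the core group bridge configuration model; check them with AdminConfig.attributes('PeerAccessPoint') and adjust before use. Note also that the console wizard links the new peer access point to the access point group for you, which a script may need to do explicitly.

```
# wsadmin -lang jython sketch: create a peer access point in the primary cell
# that points to the remote cell, with peer ports for the remote deployment
# manager and node agent. Type and attribute names are assumptions; verify first.
bridgeSettings = AdminConfig.list('CoreGroupBridgeSettings')

peerAP = AdminConfig.create('PeerAccessPoint', bridgeSettings,
                            [['name', 'RemoteCellGroup'],
                             ['cell', 'wascell03'],
                             ['coreGroup', 'DefaultCoreGroup'],
                             ['coreGroupAccessPoint', 'CGAP_1']])

# One peer port per remote bridge interface: deployment manager and node agent.
AdminConfig.create('EndPoint', peerAP, [['host', 'washost02'], ['port', '9353']])
AdminConfig.create('EndPoint', peerAP, [['host', 'washost04'], ['port', '9454']])

AdminConfig.save()
```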
- Go to the administrative console for the remote cell, click , and then click New to start the Create new peer access point wizard and create peer access points in the remote cell.
- Specify the name of the new peer access point, PrimaryCellGroup, in the Name field, wascell02 in the Remote cell name field, DefaultCoreGroup in the Remote cell core group name field, and CGAP_1 in the Remote cell core group access point name field.
- Click Next, and then select either Use peer ports or Use a proxy peer access point. For this example, select Use peer ports, and specify washost01 in the Host field and 9352 in the Port field. These values are the host name and DCS port number for the deployment manager on the primary cell.
- Click Next, confirm that the information that you specified for the new peer access point is correct, and then click Finish.
- Add a second peer port for the node agent on the primary cell.
- Select the peer access point that you just created, PrimaryCellGroup/wascell02/DefaultCoreGroup/CGAP_1, and then click Show Detail.
- In the Peer addressability section, select Peer ports, and then click Peer ports > New.
- Specify washost03 in the Host field, and 9353 in the Port field. These values are the host name and DCS port number for the node agent on the primary cell.
- Click OK and then click Save to save your changes to the master configuration.
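Before restarting, a quick read-only check from wsadmin can confirm that each cell now defines the expected peer access points; this reuses the same PeerAccessPoint type-name assumption as the earlier sketch.

```
# wsadmin -lang jython sketch: dump every peer access point defined in this
# cell so the remote cell, core group, and peer ports can be reviewed.
for peerAP in AdminConfig.list('PeerAccessPoint').splitlines():
    if peerAP:
        print AdminConfig.show(peerAP)
```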
- Restart both cells.