This section describes some of the topologies (deployment configurations) to consider before you install WebSphere Partner Gateway and its prerequisite software. The topology that you choose should be based on the factors described in the Environment planning section. The topologies described in this section are consolidated topology, split topology, and distributed topology.
In split and distributed topologies, you must ensure that the shared common folder uses the same mount point and directory structure on all of the machines. For example, suppose the dbloader, receiver, and console are installed on machine A and the document manager is installed on machine B. In this scenario, a mapped drive (for example, Y:) must be created on machine A, and the user must provide this mapped drive when prompted for the location of the shared common folder. On machine B (and on every subsequent machine where an instance of the document manager is to be installed), the same mapped drive (Y:) must be created and directed to the shared common folder.
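On UNIX and Linux systems, the same requirement is met by mounting the shared folder at an identical path on every machine. The following sketch assumes a hypothetical NFS server named fileserver exporting /export/wpgshared and an illustrative mount point /opt/wpg/common; substitute the names used in your environment.

```shell
# On machine A (dbloader, receiver, console) -- mount the shared common folder.
# fileserver:/export/wpgshared and /opt/wpg/common are illustrative names.
sudo mkdir -p /opt/wpg/common
sudo mount -t nfs fileserver:/export/wpgshared /opt/wpg/common

# On machine B (and every additional Document Manager machine), use the SAME
# mount point so that file paths stored by one component resolve on all others:
sudo mkdir -p /opt/wpg/common
sudo mount -t nfs fileserver:/export/wpgshared /opt/wpg/common
```

Supplying /opt/wpg/common as the shared common folder location on every machine then corresponds to using the same mapped drive letter on Windows.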
This topology is the simplest one. It consists of a single server running all three WebSphere Partner Gateway components (Receiver, Community Console, and Document Manager). You can also run WebSphere MQ and your RDBMS on this server, although ideally these products should be on separate dedicated servers.
The split topology consists of a front-end server containing the Receiver and Community Console components and a back-end server containing the Document Manager component. This topology is an entry-level topology for a small production environment and maximizes your software investment. WebSphere MQ and the RDBMS can run anywhere, including on these servers, but a better implementation is to place them on dedicated servers.
In a split topology, all instances of the three WebSphere Partner Gateway components need to communicate with the same shared file system. If high volume or high availability is not a concern, hosting the storage on the back-end server is an inexpensive solution. A back-end solution is preferable to front-end storage due to performance and security concerns. When this solution is used, the front-end server can use an NFS connection, or an equivalent file sharing solution, to share files with the back-end server.
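For a split topology with back-end storage, one common arrangement is to export the shared folder from the back-end server over NFS and mount it on the front-end server. The host name backend-host, the path /opt/wpg/common, and the subnet in the export entry below are illustrative assumptions, not values mandated by the product.

```shell
# On the back-end (Document Manager) server: export the shared common folder.
# The 192.168.1.0/24 subnet restricts access to the front-end network (adjust as needed).
echo '/opt/wpg/common 192.168.1.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra   # re-export all directories listed in /etc/exports

# On the front-end (Receiver and Community Console) server: mount it at the same path.
sudo mkdir -p /opt/wpg/common
sudo mount -t nfs backend-host:/opt/wpg/common /opt/wpg/common
```

Keeping the mount point identical on both servers ensures that document paths written by one component remain valid for the others.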
If you have a large installation and want a highly scalable and highly redundant environment, you will probably create a distributed topology. This topology consists of one or more dedicated servers for each WebSphere Partner Gateway component (Receiver, Community Console, and Document Manager). For example, you can have an environment that requires two Receiver servers for redundancy, four Community Console servers to support a large number of Community Console users, and six Document Manager servers for document processing. You can scale this topology by adding servers for the component that needs to handle a higher level of document processing (Document Manager), users (Community Console), or connections (Receiver) as needed.
In a distributed topology, an external NAS device is a good solution for shared storage. It gives the environment a high-performance, redundant storage device that is independent of the other servers. All servers can make an NFS connection, or use an equivalent file sharing solution, to the external device. Your RDBMS and WebSphere MQ should be on dedicated servers; their data storage does not have to be on NAS devices.
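When a NAS device provides the shared storage, each server in the distributed topology typically mounts it persistently so the share survives reboots. The export name nas-host:/vol/wpgshared, the mount point /opt/wpg/common, and the mount options below are an illustrative sketch; consult your NAS vendor's recommendations for the options appropriate to your device.

```shell
# Run on every WebSphere Partner Gateway server (Receiver, Community Console,
# and Document Manager machines). The fstab entry makes the NFS mount persistent;
# 'hard' retries indefinitely on NAS outages rather than returning I/O errors.
echo 'nas-host:/vol/wpgshared /opt/wpg/common nfs rw,hard,intr 0 0' | sudo tee -a /etc/fstab
sudo mkdir -p /opt/wpg/common
sudo mount /opt/wpg/common   # mounts using the /etc/fstab entry just added
```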
After you have decided on a topology, consider how to implement it to provide redundancy and disaster recovery capabilities. A pod-based design is recommended. In this design, you have a primary production pod that contains all of the WebSphere Partner Gateway components required to handle a production load. A secondary production pod, which can also handle the production load, provides redundancy, and a load balancer switches between the two. Figure 1 shows how you could implement the two pods.
Another pod capable of handling the production load could be located at your disaster recovery site. The front-end components of all three pods should be identical. However, the back-end components for the disaster recovery pod must be separate from the production components. Therefore, a separate database server, WebSphere MQ server, and shared file system are required. You must implement some form of data synchronization between the production and disaster recovery back-end components. WebSphere Partner Gateway only supports a single active production environment at any given time. You can also add a test pod, which can be a minimum implementation such as the consolidated topology.