Peer-replicated local in-memory cache

If multiple processes each hold an independent local WebSphere eXtreme Scale cache instance, you must keep those caches synchronized. To do so, enable a peer-replicated cache with the Java Message Service (JMS).

eXtreme Scale includes two plug-ins that automatically propagate transactional changes between peer eXtreme Scale instances. The JMSObjectGridEventListener plug-in automatically propagates eXtreme Scale changes using the Java™ Message Service (JMS).
Figure 1. Peer-replicated cache with changes that are propagated with JMS
JMS propagates changes between two ObjectGrid instances that are running in different Java virtual machines. Each ObjectGrid instance is associated with an application.
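For illustration, the JMSObjectGridEventListener plug-in is registered as a bean in the ObjectGrid descriptor XML file. The following fragment is a minimal sketch: the grid name, JNDI names, and the property names and values shown are placeholders, and the exact set of supported properties should be verified against the product documentation.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd"
    xmlns="http://ibm.com/ws/objectgrid/config">
  <objectGrids>
    <objectGrid name="PeerGrid">
      <!-- Register the JMS event listener so that committed changes are
           published to peer ObjectGrid instances over a JMS topic. -->
      <bean id="ObjectGridEventListener"
            className="com.ibm.websphere.objectgrid.plugins.builtins.JMSObjectGridEventListener">
        <!-- Property names and values below are illustrative placeholders. -->
        <property name="replicationRole" type="java.lang.String" value="DUAL_ROLES"/>
        <property name="replicationStrategy" type="java.lang.String" value="PUSH"/>
        <property name="jms_topicConnectionFactoryJndiName" type="java.lang.String" value="defaultTCF"/>
        <property name="jms_topicJndiName" type="java.lang.String" value="defaultTopic"/>
      </bean>
      <backingMap name="Customer"/>
    </objectGrid>
  </objectGrids>
</objectGridConfig>
```

Each JVM that loads this descriptor joins the same JMS topic, so a change committed in one ObjectGrid instance is pushed to every peer instance.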
If you are running a WebSphere® Application Server environment, the TranPropListener plug-in is also available. The TranPropListener plug-in uses the high availability (HA) manager to propagate the changes to each peer eXtreme Scale cache instance.
Figure 2. Peer-replicated cache with changes that are propagated with the high availability manager
The HA manager propagates changes between two ObjectGrid instances that are running in different Java virtual machines. Each ObjectGrid instance is associated with an application.
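The TranPropListener plug-in can be registered in the same way. The descriptor fragment below is a sketch under the assumption that the plug-in is configured as an ObjectGridEventListener bean like the JMS listener; the fully qualified class name shown is an assumption and should be checked against the product documentation.

```xml
<objectGrid name="PeerGrid">
  <!-- Register the transaction propagation listener; the high availability
       manager then distributes committed changes to each peer cache instance.
       Class name below is assumed, not confirmed. -->
  <bean id="ObjectGridEventListener"
        className="com.ibm.websphere.objectgrid.plugins.builtins.TranPropListener"/>
</objectGrid>
```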

Advantages

  • Cached data is fresher, because changes are propagated to every peer as they occur.
  • With the TranPropListener plug-in, as in the local environment, the eXtreme Scale cache can be created programmatically or declaratively with the eXtreme Scale deployment descriptor XML file, or with other frameworks such as Spring. Integration with the high availability manager is automatic.
  • Each BackingMap can be independently tuned for optimal memory utilization and concurrency.
  • BackingMap updates can be grouped into a single unit of work and can be integrated as a last participant in two-phase commit transactions such as Java Transaction API (JTA) transactions.
  • Ideal for topologies with few JVMs and a reasonably small data set, or for caching frequently accessed data.
  • Changes to the eXtreme Scale are replicated to all peer eXtreme Scale instances. The changes are consistent as long as a durable subscription is used.
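As an example of declarative creation, the descriptor below defines a grid whose backing maps carry independent tuning. The grid and map names and the tuning values are illustrative placeholders; the `ttlEvictorType` and `timeToLive` attributes are typical BackingMap eviction settings, but confirm the attribute set against the descriptor schema.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd"
    xmlns="http://ibm.com/ws/objectgrid/config">
  <objectGrids>
    <objectGrid name="LocalGrid">
      <!-- Each backing map is tuned independently: Customer entries expire
           600 seconds after last access, Order entries are never evicted. -->
      <backingMap name="Customer" ttlEvictorType="LAST_ACCESSED_TIME" timeToLive="600"/>
      <backingMap name="Order" ttlEvictorType="NONE"/>
    </objectGrid>
  </objectGrids>
</objectGridConfig>
```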

Disadvantages

  • Configuration and maintenance of the JMSObjectGridEventListener and its JMS resources can be complex, whether the eXtreme Scale cache is created programmatically or declaratively with the eXtreme Scale deployment descriptor XML file or with other frameworks such as Spring.
  • Not scalable: every JVM holds a full copy of the cache, so the memory required to cache the data may overwhelm the JVM.
  • Functions poorly as Java virtual machines are added:
    • Data cannot easily be partitioned.
    • Invalidation is expensive.
    • Each cache must be warmed up independently.

When to use

Use this deployment topology only when the amount of data to be cached is small enough to fit into a single JVM and is relatively stable.