WebSphere® eXtreme Scale is scalable through the use of partitioned data, and can scale to thousands of containers if required because each container is independent of the others.
WebSphere eXtreme Scale divides data sets into distinct partitions that can be moved between processes, or even between machines, at run time. You can, for example, start with a deployment of four servers and then expand to a deployment of ten servers as the demands on the cache grow. Just as you can add more memory and processing power to a machine for vertical scalability, partitioning lets you scale eXtreme Scale horizontally across machines. This is another major difference between in-memory databases (IMDBs) and a data grid such as eXtreme Scale: IMDBs can scale only vertically.
With WebSphere eXtreme Scale, you can also use a set of APIs to gain transactional access to this partitioned and optionally distributed data. In terms of performance, the choices you make about how to interact with the cache are as significant as the functions you use to manage the cache for availability.
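To make the idea of transactional access concrete, here is a minimal sketch in plain Java. It is not the eXtreme Scale API itself (which exposes Session and ObjectMap interfaces with begin/commit semantics); the `TxMap` class and its methods are illustrative inventions that buffer writes until commit, so a rollback discards them.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only, not the eXtreme Scale ObjectMap/Session API:
// a map with begin/commit/rollback semantics. Writes go to a pending copy
// and become visible only on commit.
class TxMap {
    private final Map<String, String> committed = new HashMap<>();
    private Map<String, String> pending; // non-null while a transaction is open

    // Start a transaction by snapshotting the committed state.
    public void begin() {
        pending = new HashMap<>(committed);
    }

    // Buffer a write in the pending snapshot.
    public void put(String key, String value) {
        pending.put(key, value);
    }

    // Read from the open transaction if there is one, else from committed data.
    public String get(String key) {
        return pending != null ? pending.get(key) : committed.get(key);
    }

    // Make buffered writes durable.
    public void commit() {
        committed.clear();
        committed.putAll(pending);
        pending = null;
    }

    // Discard buffered writes.
    public void rollback() {
        pending = null;
    }
}
```

In the real product, the equivalent of `begin()` and `commit()` would be issued against a session obtained from the grid, and each transaction would typically be scoped to a single partition.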
Partitioning is the mechanism that WebSphere eXtreme Scale uses to scale out an application. Partitioning is the separation of the application state into parts, where each part contains some set of complete instance data. Partitioning is not like Redundant Array of Independent Disks (RAID) striping, which slices each instance across all stripes; each partition hosts the complete data for its individual entries. Partitioning is a very effective means of scaling, but it is not applicable to all applications. Applications that require transactional guarantees across large sets of data do not scale well and cannot be partitioned effectively; for this reason, eXtreme Scale does not currently support two-phase commit across partitions.
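The distinction between partitioning and RAID-style striping can be sketched in a few lines of plain Java. This is a toy hash-based router, not eXtreme Scale's actual placement logic; the `PartitionedCache` class and its methods are illustrative assumptions. The key point it demonstrates is that each key's complete entry lives in exactly one partition, so a lookup touches only that partition.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a toy hash-partitioned cache. In a real grid,
// each partition would live in a separate container JVM rather than in
// an in-process HashMap.
class PartitionedCache<K, V> {
    private final Map<K, V>[] partitions;

    @SuppressWarnings("unchecked")
    public PartitionedCache(int numPartitions) {
        partitions = new Map[numPartitions];
        for (int i = 0; i < numPartitions; i++) {
            partitions[i] = new HashMap<>();
        }
    }

    // Route a key to its partition by hashing. The routing is deterministic,
    // so every read and write for a key goes to the same partition.
    public int partitionOf(K key) {
        return Math.floorMod(key.hashCode(), partitions.length);
    }

    // Store the complete entry in a single partition (no striping).
    public void put(K key, V value) {
        partitions[partitionOf(key)].put(key, value);
    }

    // A lookup consults only the one partition that owns the key.
    public V get(K key) {
        return partitions[partitionOf(key)].get(key);
    }
}
```

Because each partition is self-contained, adding containers simply means redistributing whole partitions among them, which is what allows the grid to grow from four servers to ten without restructuring the data.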