IBM WebSphere eXtreme Scale V7.0
Administration and configuration
This presentation will cover WebSphere® eXtreme Scale V7.0 administration and configuration.

Agenda
Local cache environment configuration
Distributed cache environment configuration
Starting and stopping eXtreme Scale servers
Monitoring
Security
The presentation will begin with the configuration of the local cache and distributed cache WebSphere eXtreme Scale environments. Then the administration topics of starting and stopping servers, monitoring, and security are covered.

Configuration options
ObjectGrid configuration XML files
ObjectGrid configuration APIs
Inversion of control frameworks such as Spring
Combination of the above
To cache objects using eXtreme Scale, you must create an ObjectGrid instance within your application. The instance can be created based on configuration data stored in an XML file, configured programmatically, created and enhanced to integrate with inversion of control frameworks such as Spring, or through a combination of these options.

Local cache environment
Application logic runs in the same JVM as the data in the eXtreme Scale grid, or "near cache"
Application will only access the local ObjectGrid cache
Partitioning and replication do not apply
(Diagram: application threads in a single Java™ Virtual Machine accessing a local ObjectGrid cache)
In the local cache environment, or "near cache", the application logic runs in the same JVM as the data in the eXtreme Scale grid. Each application will only access the local ObjectGrid instance to store or retrieve data from its cache.

Configuration for local cache environment
Include objectgrid.jar or ogclient.jar on the classpath
Create and modify the ObjectGrid descriptor XML file
  Defines ObjectGrids, associated maps, and plug-ins
  Specifies security attributes for the eXtreme Scale grid (optional)
  Can contain other application-specific information provided by the application developer
Developer can provide the ObjectGrid descriptor XML file inside the application JAR file
Developer can specify all of the information through APIs
Launch your application
Start the ObjectGrid container
In a local cache environment, first ensure the objectgrid.jar or ogclient.jar file is included in the classpath. A local ObjectGrid can then be configured by using an ObjectGrid descriptor XML file or the ObjectGrid APIs. The configuration elements include the definition of the ObjectGrid, the associated maps that are used with the application, many customizable plug-ins, and security attributes. Finally, launch your application and start the ObjectGrid container.

Simple ObjectGrid descriptor XML file
This simple ObjectGrid descriptor XML file begins with the required header for each ObjectGrid XML file. It then defines the CompanyGrid ObjectGrid with Customer, Item, OrderLine, and Order BackingMaps. A sketch of such a file is shown below.
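A minimal sketch of such a descriptor, assuming the standard objectGrid.xsd schema header; verify the namespace and schemaLocation against the XSD shipped with your eXtreme Scale installation:

  <?xml version="1.0" encoding="UTF-8"?>
  <objectGridConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd"
      xmlns="http://ibm.com/ws/objectgrid/config">
      <objectGrids>
          <!-- One ObjectGrid named CompanyGrid with four BackingMaps -->
          <objectGrid name="CompanyGrid">
              <backingMap name="Customer"/>
              <backingMap name="Item"/>
              <backingMap name="OrderLine"/>
              <backingMap name="Order"/>
          </objectGrid>
      </objectGrids>
  </objectGridConfig>

Any plug-ins, event handlers, or security attributes described earlier would also be defined in this file.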
Distributed cache environment
Data can be partitioned and replicated across multiple grid containers
Partitioning and replication configuration information is independent of the specific servers used to host the grid containers
Most flexible option
  Not required to predefine each ObjectGrid server
  Can add servers to this environment as required
In a distributed cache environment, the application logic can run on application servers separate from the grid servers. In that case, the application servers host a grid client which communicates with the grid servers to access data from the eXtreme Scale grid. The data stored in the grid is spread across all the JVMs that have WebSphere eXtreme Scale installed and configured. The data in the eXtreme Scale grid can be partitioned and replicated across multiple grid containers to ensure fault tolerance and high availability. Since all the connected grid containers act as a single entity, more JVMs can be added dynamically to increase the storage space available in the grid.

Configuration for distributed cache environment
Create and modify the ObjectGrid descriptor XML file
  Developers can specify the descriptor information through APIs
Define the deployment policy for the ObjectGrid servers
  Uses an ObjectGrid deployment definition XML file
Start ObjectGrid servers
  Catalog servers
  ObjectGrid servers
For clients
  Connect to a catalog server
A distributed ObjectGrid cache can be configured by using an ObjectGrid descriptor XML file or the ObjectGrid APIs. The configuration elements define each ObjectGrid, any plug-ins or event handlers, and the maps within those grids. Optionally, a deployment policy XML file that is compatible with the descriptor XML file can be used to manage the deployment of an ObjectGrid into a dynamic environment. Server topology information is not pre-configured: there are no server names or physical topology information in the deployment policy. All shards in an eXtreme Scale grid are automatically placed into grid containers by the catalog service, which uses the constraints defined by the deployment policy to manage shard placement. This allows very large grids to be easily configured, and it allows you to add servers to your environment as needed. Catalog servers must be started first, and in parallel if there is more than one. Then ObjectGrid servers and clients are started and communicate with the primary catalog server to connect into the grid.

Deployment policy XML file
Names the maps belonging to each map set
Defines the number of partitions and the number of synchronous and asynchronous replicas
Specifies placement behaviors
  The minimum number of active containers before placement commences
  Automatic replacement of lost shards
  Placement of each shard from a single partition onto a different machine
The deployment policy XML file specifies the maps belonging to each map set, the number of partitions, and the number of synchronous and asynchronous replicas. It also dictates these placement behaviors: the minimum number of active containers before placement commences, the automatic replacement of lost shards, and the placement of each shard from a single partition onto a different machine.

Distributed cache deployment policy XML file
The distributed ObjectGrid deployment policy XML is intended to be paired with the corresponding ObjectGrid descriptor XML. The deployment policy illustrated on this slide is compatible with the simple descriptor XML shown earlier in this presentation. This deployment policy file begins with the required header for each ObjectGrid XML file. It then defines the CompanyGrid ObjectGrid with one mapSet that is divided into 11 partitions. Each partition must have exactly one synchronous replica, which is dictated by the minSyncReplicas and maxSyncReplicas attributes. The numInitialContainers attribute instructs the catalog service to defer placement until four containers are available to support this ObjectGrid. The Customer, Item, OrderLine, and Order BackingMaps are contained in the mapSet. A sketch of such a deployment policy follows.
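A minimal sketch of a compatible deployment policy, assuming the standard deploymentPolicy.xsd schema header; the map set name mapSet1 is illustrative, and the schemaLocation should be checked against your installation:

  <?xml version="1.0" encoding="UTF-8"?>
  <deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
      xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
      <objectgridDeployment objectgridName="CompanyGrid">
          <!-- 11 partitions, exactly one synchronous replica per partition,
               placement deferred until four containers are available -->
          <mapSet name="mapSet1" numberOfPartitions="11"
                  minSyncReplicas="1" maxSyncReplicas="1"
                  numInitialContainers="4">
              <map ref="Customer"/>
              <map ref="Item"/>
              <map ref="OrderLine"/>
              <map ref="Order"/>
          </mapSet>
      </objectgridDeployment>
  </deploymentPolicy>

The objectgridName and the map ref values must match the ObjectGrid and BackingMap names defined in the paired descriptor XML.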
Replication using zones
JVMs tagged with zone identifier
Deployment file includes zone rules
  Associated with shard type
  Specifies zones a shard can be placed in
  Rule name
  List of zones
  Inclusive or exclusive flag
Can guarantee primary and replica shards not placed on same box
  Normally avoided, but can happen if multiple NICs
Zoning allows more control over how eXtreme Scale places shards in a grid and can guarantee that primary and replica shards are not placed in the same physical location. To use zoning, JVMs that host an ObjectGrid server must be tagged with a zone identifier. The deployment file can then include one or more zone rules. The zone rules are associated with a shard type, such as primary, synchronous replica, or asynchronous replica. The rule also specifies the possible set of zones a shard can be placed in. The inclusive flag means that once a shard is placed in a zone from the list, all other shards will also be placed in that zone. An exclusive setting means that each shard for a partition is placed in a different zone in the zone list. This means that if there are three shards (a primary and two synchronous replicas), then the zone list must have three zones in it.

Placing shards in zones
zoneMetadata sub-element
  zoneRule sub-element
    name
    exclusivePlacement
      Two shards from the same partition will not be placed on machines in the same zone
    zone sub-element
      name
        Name that a set of ObjectGrid servers were started with using the -zone option, or name of a zone node group
  shardMapping sub-element
    shard
      One of "P", "S", or "A" (primary, synchronous replica, asynchronous replica)
      Not all are required
    zoneRuleRef
      Name of the zoneRule used to place shards of this type
To configure zoning, the elements listed on this slide can be included in the ObjectGrid deployment policy XML file to define the rules for placing shards in zones.

Defining zones
Stand-alone
  "-zone" parameter on startOgServer
  When a zone name is not passed to a container, it is assumed that the customer does not want to use zones
  It is an error to launch one container with a zone name and another container without a zone name if the containers are in the same domain
Embedded
  Node group name: ReplicationGroupZONENAME
  Nodes must not overlap other zoned node groups
When you use the startOgServer script to start a stand-alone server, use the -zone parameter to specify the zone for all containers within the server. If a zone name is not passed, then it is assumed that the customer does not want to use zones. If two containers reside in the same domain and one container is launched with a zone name while the other is not, then you will receive an error. For embedded servers, zones are identified by node group membership. Node groups can be named using the convention "ReplicationGroupZONENAME". Any nodes in such a group are members of the zone "ZONENAME". Care needs to be taken to ensure that these cluster members are not members of two or more such node groups. A cluster member JVM checks for zone membership at start only; adding a new node group or changing the membership will only have an impact on newly started or restarted JVMs.

Sample element
This ObjectGrid deployment policy XML file contains zone rule information, including the zone names, shard type, and whether to use exclusive placement or not. A sketch of the zone rule elements follows.
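A sketch of a zoneMetadata section that sits inside the mapSet element of the deployment policy shown earlier; the rule name stripeZone and the zone names ZoneA and ZoneB are illustrative, and the element and attribute names should be checked against the deploymentPolicy.xsd for your release:

  <zoneMetadata>
      <!-- Primary and synchronous replica shards both use the same rule;
           exclusivePlacement keeps shards of one partition in different zones -->
      <shardMapping shard="P" zoneRuleRef="stripeZone"/>
      <shardMapping shard="S" zoneRuleRef="stripeZone"/>
      <zoneRule name="stripeZone" exclusivePlacement="true">
          <zone name="ZoneA"/>
          <zone name="ZoneB"/>
      </zoneRule>
  </zoneMetadata>

With two shard types mapped to an exclusive rule, the rule lists two zones so that the primary and its synchronous replica land in different zones.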
Starting WebSphere embedded servers: distributed environment
Start the catalog server(s) from the administrative console or command line
Starting an ObjectGrid server
  Install an application with two files in the META-INF directory of one of the modules
    objectGrid.xml
    objectGridDeployment.xml
  Start the ObjectGrid server(s) from the administrative console or command line
  Contacts the configured catalog server
    Deployment manager, or one of the catalog servers specified by the cell's catalog.services.cluster custom property
Launching a catalog service is the first step in bringing a distributed cache environment online. Start the catalog server or catalog server cluster from either the administrative console or the command line, like you would a typical WebSphere server or cluster.

Stand-alone example
Start a catalog server:
  startOgServer.sh catalogA -catalogServiceEndpoints catalogA:MyServer1.company.com:12600:12601
Start an ObjectGrid server:
  startOgServer.sh containerA -objectgridFile objectGrid.xml -deploymentPolicyFile objectGridDeployment.xml -catalogServiceEndpoints MyServer1.company.com:12602 -jvmargs -cp application.jar
In this example, a server named catalogA on MyServer1.company.com is started. The server's name is the first argument that is passed to the script, and it will become the primary catalog server. During initialization of catalogA, the catalogServiceEndpoints are examined to determine which ports are allocated for this process. Other catalog servers can be specified here, so catalogA would know which other servers to accept connections from if a catalog server cluster existed. Next, an ObjectGrid server named containerA is started. The ObjectGrid descriptor file, objectGrid.xml, and the deployment policy, objectGridDeployment.xml, are defined. When the server's grid container is started, it publishes its deployment policy to the catalog service specified in the catalogServiceEndpoints parameter. The catalog service examines objectGridDeployment.xml to set up partitioning and replication for the eXtreme Scale grid. The -jvmargs option is used to place the application.jar file on the classpath.

Starting stand-alone catalog server
  <install root>/bin/startOgServer [catalogServer | <catalog server name>]
Catalog server options:
  -catalogServiceEndPoints
  -quorum true|false
  -heartbeat 0|1|-1
  -clusterFile
  -clusterUrl
  -clusterSecurityFile
  -clusterSecurityUrl
  -domain
  -listenerHost
  -listenerPort
  -serverProps
  -JMXServicePort
Launching a catalog service is the first step in bringing a distributed cache environment online. In the stand-alone environment, the process is started using the startOgServer script. The parameter options are listed here and are also described in detail in the eXtreme Scale information center. To start a catalog server cluster, use the -catalogServiceEndPoints option. You must start all clustered catalog servers in parallel.

Starting a stand-alone server
Starting the ObjectGrid server:
  startOgServer.sh <server name> -catalogServiceEndPoints <host:port> -objectgridFile <descriptor XML> [options]
Options:
  -deploymentPolicyFile
  -deploymentPolicyUrl
  -listenerHost (default: localhost)
  -listenerPort (default: 2809)
  -serverProps
  -zone
  -traceSpec
  -traceFile
  -timeout
  -script
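As a sketch of how these options combine with the zoning discussed earlier, the following starts a container server in a zone against the catalog server from the stand-alone example; containerB and ZoneA are illustrative names, the host name is reused from that example, and 2809 is the default listener port noted above, so substitute your own values.

  # Start a container server tagged with zone ZoneA, pointing at the
  # catalog server's listener host and port
  startOgServer.sh containerB -catalogServiceEndPoints MyServer1.company.com:2809 \
      -objectgridFile objectGrid.xml -deploymentPolicyFile objectGridDeployment.xml \
      -zone ZoneA -jvmargs -cp application.jar

Because the deployment policy carries no server names, additional containers can be started the same way in other zones and the catalog service will place shards across them according to the zone rules.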