To configure a deployment policy, use a deployment policy
descriptor XML file.
The following sections define the elements and attributes
of the deployment policy descriptor XML file. The deployment
policy XML schema is defined in the deploymentPolicy.xsd file.
deploymentPolicy element
The deploymentPolicy
element is the top-level element of the deployment policy XML file.
This element sets up the namespace of the file and the schema location.
The schema is defined in the
deploymentPolicy.xsd file.
- Number of occurrences: One
- Child element: objectgridDeployment element
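The namespace and schema wiring of the root element can be sketched as follows; the relative path to deploymentPolicy.xsd is illustrative and depends on where the schema file is stored:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
    <!-- one or more objectgridDeployment child elements -->
</deploymentPolicy>
```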
objectgridDeployment element
The objectgridDeployment
element is used to reference an ObjectGrid instance from the ObjectGrid
XML file. Within the objectgridDeployment element, you can divide
your maps into map sets.
- Number of occurrences: One or more
- Child element: mapSet element
Attributes
- objectgridName
- Specifies the name of the ObjectGrid to deploy. This attribute
references an objectGrid element that is defined in the ObjectGrid
XML file. (Required)
<objectgridDeployment objectgridName="objectgridName"/>
For example, the objectgridName attribute is set to
CompanyGrid in the companyGridDpReplication.xml file. The objectgridName
attribute references the CompanyGrid that is
defined in the companyGrid.xml file. For more
information, see ObjectGrid descriptor XML file.
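The cross-file reference can be sketched as follows; the objectGrid element shown for companyGrid.xml is a minimal assumed shape, not the full sample file:

```xml
<!-- companyGrid.xml (ObjectGrid descriptor XML file) -->
<objectGrid name="CompanyGrid">
    <!-- backingMap definitions go here -->
</objectGrid>

<!-- deployment policy descriptor XML file -->
<objectgridDeployment objectgridName="CompanyGrid">
    <!-- mapSet definitions go here -->
</objectgridDeployment>
```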
mapSet element
The mapSet element is used to
group maps together. The maps within a mapSet element are partitioned
and replicated similarly. Each map must belong to only one mapSet.
- Number of occurrences: One or more
- Child element: map element
Attributes
- name
- Specifies the name of the mapSet. This attribute must be unique
within the objectgridDeployment element. (Required)
- numberOfPartitions
- Specifies the number of partitions for the mapSet. The default
is 1. The number should be appropriate for the number of containers
that host the partitions. (Optional)
- minSyncReplicas
- Specifies the minimum number of synchronous replicas for each
partition in the mapSet. The default is 0. Shards are not placed until
the domain can support the minimum number of synchronous replicas.
To support the minSyncReplicas value, you need one more container
than the value of minSyncReplicas. If the number of synchronous replicas
falls below the value of minSyncReplicas, write transactions are no
longer allowed for that partition. (Optional)
- maxSyncReplicas
- Specifies the maximum number of synchronous replicas for each
partition in the mapSet. The default is 0. No other synchronous replicas
are placed for a partition after a domain reaches this number of synchronous
replicas for that specific partition. Adding containers that can support
this ObjectGrid can result in an increased number of synchronous replicas
if your maxSyncReplicas value has not already been met. (Optional)
- maxAsyncReplicas
- Specifies the maximum number of asynchronous replicas for each
partition in the mapSet. The default is 0. After the primary and all
synchronous replicas have been placed for a partition, asynchronous
replicas are placed until the maxAsyncReplicas value is met. (Optional)
- replicaReadEnabled
- If this attribute is set to true, read
requests are distributed amongst a partition primary and its replicas.
If the replicaReadEnabled attribute is false,
read requests are routed to the primary only. The default is false.
(Optional)
- numInitialContainers
- Specifies the number of eXtreme Scale containers that are required
before initial placement occurs for the shards in this mapSet. The
default is 1. This attribute can help save CPU and network bandwidth
when bringing an ObjectGrid online. (Optional)
- Starting an eXtreme Scale container sends an event to the catalog
service. The first time that the number of active containers is equal
to the numInitialContainers value for a mapSet, the catalog service
places the shards from the mapSet, provided that minSyncReplicas can
also be satisfied. After the numInitialContainers value has been met,
each container started event can trigger a rebalance of unplaced and
previously placed shards. If you know approximately how many containers
you are going to start for this mapSet, you can set the numInitialContainers
value close to that number to avoid the rebalance after every container
start. Placement does not occur until you reach the numInitialContainers
value for the mapSet.
- autoReplaceLostShards
- Specifies if lost shards are placed on other containers. The default
is true. When a container is stopped or fails, the shards running
on the container are lost. A lost primary has one of its replicas
promoted to be the new primary. Because of this promotion, one of
the replicas is lost. In certain cases, you might not want to have
your lost shards automatically replaced onto available containers.
If you would like lost shards to remain unplaced, set the autoReplaceLostShards
attribute to false. This setting does not affect
the promotion chain, but only the replacement of the last shard in
the chain. (Optional)
- developmentMode
- With this attribute, you can influence where a shard is placed
in relation to its peer shards. The default is true. When the developmentMode
attribute is set to false, no two shards from
the same partition are placed on the same machine. When the developmentMode
attribute is set to true, shards from the same
partition can be placed on the same machine. In either case, no two
shards from the same partition are ever placed in the same container.
(Optional)
- placementStrategy
- There are two placement strategies. The default strategy is FIXED_PARTITION,
where the number of primary shards that are placed across the available
containers is equal to the number of partitions that are defined; the replicas
for each partition add to the total number of shards that are placed.
The alternate strategy is PER_CONTAINER,
where the number of primary shards that are placed on each container
is equal to the number of partitions that are defined, with an equal
number of replicas placed on other containers. (Optional)
<mapSet
    name="mapSetName"
    numberOfPartitions="numberOfPartitions"
    minSyncReplicas="minimumNumber"
    maxSyncReplicas="maximumNumber"
    maxAsyncReplicas="maximumNumber"
    replicaReadEnabled="true" | "false"
    numInitialContainers="numberOfInitialContainersBeforePlacement"
    autoReplaceLostShards="true" | "false"
    developmentMode="true" | "false"
    placementStrategy="FIXED_PARTITION" | "PER_CONTAINER"
/>
In the following example, the mapSet element
is used to configure a deployment policy. The name attribute is set to
mapSet1,
and the mapSet is divided into 10 partitions. Each of these partitions must have
at least one synchronous replica available and no more than two synchronous
replicas. Each partition also has an asynchronous replica if the environment
can support it. All synchronous replicas are placed before any asynchronous
replicas are placed. Additionally, the catalog service does not attempt
to place the shards for mapSet1 until the domain can support the minSyncReplicas
value. Supporting the minSyncReplicas value requires two or more containers:
one for the primary and one for the synchronous replica.
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
<objectgridDeployment objectgridName="CompanyGrid">
<mapSet name="mapSet1" numberOfPartitions="10"
minSyncReplicas="1" maxSyncReplicas="2" maxAsyncReplicas="1"
numInitialContainers="10" autoReplaceLostShards="true"
developmentMode="false" replicaReadEnabled="true">
<map ref="Customer"/>
<map ref="Item"/>
<map ref="OrderLine"/>
<map ref="Order"/>
</mapSet>
</objectgridDeployment>
</deploymentPolicy>
Even though only two containers
are required to satisfy the replication settings, the numInitialContainers
attribute requires 10 available containers before the catalog service
attempts to place any of the shards in this mapSet. After the domain
has 10 containers that are able to support the CompanyGrid ObjectGrid,
all shards in mapSet1 are placed.
Because the autoReplaceLostShards
attribute is set to true, any shard in this mapSet that is lost as
the result of container failure is automatically replaced onto another
container, provided that a container is available to host the lost
shard. Shards from the same partition cannot be placed on the same
machine for mapSet1 because the developmentMode attribute is set to false.
Read requests are distributed across the primary and its replicas
for each partition because the replicaReadEnabled attribute is set to true.
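As a rough sketch of the shard counts that the example settings imply under the default FIXED_PARTITION strategy (the arithmetic is mine, not from the sample files):

```xml
<!-- With numberOfPartitions="10", minSyncReplicas="1",
     maxSyncReplicas="2", maxAsyncReplicas="1":
       primary shards              = 10 (one per partition)
       synchronous replica shards  = 10 to 20 (1 to 2 per partition)
       asynchronous replica shards = up to 10
       total shards placed         = 20 to 40, spread across containers -->
<mapSet name="mapSet1" numberOfPartitions="10"
    minSyncReplicas="1" maxSyncReplicas="2" maxAsyncReplicas="1"/>
```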
map element
Each map in a mapSet element references
one of the backingMap elements that is defined in the ObjectGrid XML
file. Every map in a distributed
eXtreme Scale configuration must belong
to exactly one mapSet element. See the
objectGrid.xsd file for
more information about the ObjectGrid XML file.
- Number of occurrences: One or more
- Child element: None
Attributes
- ref
- Provides a reference to a backingMap element in the ObjectGrid
XML file. Each map in a mapSet must reference a backingMap from the
ObjectGrid XML file. The value that is assigned to the ref attribute
must match the name attribute of one of the backingMap elements in
the ObjectGrid XML file. (Required)
<map
    ref="backingMapReference"
/>
The companyGridDpMapSetAttr.xml file
uses the ref attribute on the map element to reference each of the backingMaps
from the companyGrid.xml file.
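A plausible shape for that file, assuming the four backingMaps from the earlier example (illustrative only; the mapSet attributes are not the actual file contents):

```xml
<objectgridDeployment objectgridName="CompanyGrid">
    <mapSet name="mapSet1" numberOfPartitions="1">
        <!-- each ref value must match a backingMap name in companyGrid.xml -->
        <map ref="Customer"/>
        <map ref="Item"/>
        <map ref="OrderLine"/>
        <map ref="Order"/>
    </mapSet>
</objectgridDeployment>
```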