You can use several options to control when shards are placed on the container servers in your configuration. During startup, you might choose to delay shard placement. After all of your container servers are running, you might need to suspend, resume, or otherwise change placement while you maintain the servers.
Procedure
Controlling placement during startup
You can control when shards begin to be placed while your environment is starting. Some control is in place by default: if you do not take any action, placement begins immediately. When placement begins immediately, the shards might not be distributed evenly as subsequent container servers start, and further placement operations run to balance the distribution.
- Temporarily suspend the balancing of shards to prevent immediate, and potentially uneven, shard placement while your container servers are starting. Before you start your container servers, use the xscmd -c suspendBalancing command to stop the balancing of shards for a specific data grid and map set. After the container servers are started, use the xscmd -c resumeBalancing command to begin the placement of shards on the container servers.
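For example, the following sequence suspends balancing before the container servers start and resumes it after they are all running. The data grid and map set names myGrid and myMapSet are placeholders, and it is assumed that these commands take the same -g and -ms options that the triggerPlacement command uses:
  xscmd -c suspendBalancing -g myGrid -ms myMapSet
  (start your container servers)
  xscmd -c resumeBalancing -g myGrid -ms myMapSet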
- Configure the placementDeferralInterval property to minimize the number of shard placement cycles on the container servers. Shard placement is triggered at the defined time interval. Set the placementDeferralInterval property in the server properties file for the catalog server. If you are using the embedded server API, use the setPlacementDeferralInterval method on the CatalogServerProperties interface. This property specifies the number of milliseconds to wait before shards are placed on the container servers. The default value is 15 seconds. With the default value, when a container server starts, placement does not begin until the specified time has passed. If multiple container servers are starting in succession, the deferral interval timer is reset each time a new container server starts within the interval. For example, if a second container server starts 10 seconds after the first container server, placement does not start until 15 seconds after the second container server started. However, if a third container server starts 20 seconds after the second container server, placement has already begun on the first two container servers.
When container servers become unavailable, placement
is triggered as soon as the catalog server learns of the event so
that recovery can occur as quickly as possible.
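For example, to defer placement for 60 seconds, you might set the following line in the catalog server properties file; the value is in milliseconds:
  placementDeferralInterval=60000
If you use the embedded server API instead, a minimal sketch follows. It assumes that ServerFactory.getCatalogProperties returns the CatalogServerProperties instance for the process and that setPlacementDeferralInterval takes a value in milliseconds:
  import com.ibm.websphere.objectgrid.server.CatalogServerProperties;
  import com.ibm.websphere.objectgrid.server.ServerFactory;

  public class CatalogStartup {
      public static void main(String[] args) throws Exception {
          // Retrieve the catalog server properties for this embedded server.
          CatalogServerProperties catalogProps = ServerFactory.getCatalogProperties();
          // Defer shard placement for 60 seconds (60000 milliseconds).
          catalogProps.setPlacementDeferralInterval(60000);
          // Create the embedded server instance; set the properties
          // before this call so that they take effect.
          ServerFactory.getInstance();
      }
  }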
You can use the following tips to help determine whether your placement deferral value is set to the right amount of time:
- As you concurrently
start the container servers, look at the CWOBJ1001 messages
in the SystemOut.log file for each container
server. The timestamp of these messages in each container server log
file indicates the actual container server start time. You might consider
adjusting the placementDeferralInterval property
to include more container server starts. For example, if the first
container server starts 90 seconds before the last container server,
you might set the property to 90 seconds.
- Note how long after the CWOBJ1001 messages the CWOBJ1511 messages occur. This amount of time can indicate whether the deferral occurred successfully.
- If you are using a development environment, consider the length of the interval when you are testing your application, because a long interval delays placement during each test run.
- Configure the numInitialContainers attribute. If you previously used the numInitialContainers attribute, you can continue using it. However, the xscmd -c suspendBalancing and xscmd -c resumeBalancing commands, together with the placementDeferralInterval property, are suggested over the numInitialContainers attribute for controlling placement. The numInitialContainers attribute, which is in the deployment policy descriptor XML file, specifies the number of container servers that are required before initial placement occurs for the shards in the mapSet element. If you have both numInitialContainers and placementDeferralInterval set, note that no placement occurs until the numInitialContainers value has been met, regardless of the value of the placementDeferralInterval property.
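For example, a deployment policy descriptor that requires four container servers before initial placement might look like the following sketch. The grid, map set, and map names are placeholders, and the partition and replica values are illustrative only:
  <?xml version="1.0" encoding="UTF-8"?>
  <deploymentPolicy xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <objectgridDeployment objectgridName="myGrid">
      <!-- Placement for this map set does not begin until at least
           four container servers have started. -->
      <mapSet name="myMapSet" numberOfPartitions="13"
          minSyncReplicas="0" maxSyncReplicas="1"
          numInitialContainers="4">
        <map ref="myMap"/>
      </mapSet>
    </objectgridDeployment>
  </deploymentPolicy>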
Controlling placement after initial startup
- Force placement to occur. You can use the xscmd -c triggerPlacement -g my_OG -ms my_Map_Set command, where my_OG and my_Map_Set are set to values for your data grid and map set, to force placement at a point in time when it might not otherwise occur. For example, you might run this command when the amount of time that is specified by the placementDeferralInterval property has not yet passed or when balancing is suspended.
- Reassign a primary shard.
Use the xscmd
-c swapShardWithPrimary command to assign a replica shard
to be the new primary shard. The previous primary shard becomes a
replica.
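For example, an invocation might look like the following line. The -g and -ms options scope the command to your data grid and map set; the -p option shown here for identifying the partition whose shards you want to swap is an assumption, so verify the exact option names in the command help:
  xscmd -c swapShardWithPrimary -g myGrid -ms myMapSet -p 4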
- Rebalance the primary and replica shards. Use the xscmd -c balanceShardTypes command to adjust the ratio of primary and replica shards to be equitable among the running container servers in the configuration. The resulting ratio is consistent to within one shard on each container server.
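For example, assuming that this command takes the same -g and -ms options as the other placement commands, and using placeholder names:
  xscmd -c balanceShardTypes -g myGrid -ms myMapSet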
- Suspend or resume placement. Use the xscmd -c suspendBalancing command or the xscmd -c resumeBalancing command to stop and start the balancing of shards for a specific data grid and map set. When balancing is suspended, the following placement actions can still run:
- Shard promotion when container servers fail.
- Shard role swapping with the xscmd -c swapShardWithPrimary command.
- Placement that is triggered with the xscmd -c triggerPlacement -g myOG -ms myMapSet command.
What to do next
You can monitor placement in the environment with the xscmd -c placementServiceStatus command.
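For example, with placeholder names, and assuming that the -g and -ms options can scope the output in the same way as the other placement commands:
  xscmd -c placementServiceStatus -g myGrid -ms myMapSet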