Occasionally, you might encounter application placement behavior that is not expected. This topic describes some commonly asked questions and things to look for when application placement is not working as you expect.
To find where the application placement controller is running, you can use the administrative console or scripting. From a script, run the checkPlacementLocation.jacl script to display the server on which the application placement controller is running.
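If you prefer to check from wsadmin directly, a minimal Jython sketch such as the following (based on the same PlacementControllerMBean query that is shown later in this topic) prints the MBean ObjectName, which includes the node and process that host the controller:
wsadmin>apc = AdminControl.queryNames('WebSphere:type=PlacementControllerMBean,*')
wsadmin>print apc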
The application placement controller stops a server for the following reasons:
The application placement controller does not show that the server is started for one of the following reasons:
Sometimes the application placement controller does not start a server because the server operation times out. You can configure the amount of time before a timeout occurs in the administrative console by editing the Server operation timeout field. If your cell is large, your system is slow, or your system is under a high workload, set this field to a higher value. This value represents the amount of time that is allowed for each server to start, but the timeout scales with the number of servers in your cell. For example, if you have five servers and set the value to 10 minutes, then a timeout occurs after 50 minutes.
The application placement controller estimates the memory consumption of a server with the following formulas:
Server memory consumption = 1.2*maxHeapSize + 64 MB
Server memory consumption = .667*resident memory size + .333*virtual memory size
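As a purely illustrative sketch of these two estimates, the following Python fragment assumes that the first formula is the projection that is used before a server is started (based on its configured maximum heap size) and that the second formula applies to a running server (based on its measured resident and virtual memory). The sample values are hypothetical, and all sizes are in MB:
def projected_memory_mb(max_heap_size_mb):
    # Projection from the configured maximum heap size (first formula).
    return 1.2 * max_heap_size_mb + 64

def observed_memory_mb(resident_mb, virtual_mb):
    # Estimate from measured process memory (second formula).
    return 0.667 * resident_mb + 0.333 * virtual_mb

print(projected_memory_mb(1024))      # 1292.8 MB for a 1 GB maximum heap
print(observed_memory_mb(900, 2048))  # approximately 1282 MB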
./wsadmin.sh -profile PlacementControllerProcs.jacl -c "showFailedServerStartOperations"
wsadmin>apc = AdminControl.queryNames('WebSphere:type=PlacementControllerMBean,process=dmgr,*')
wsadmin>print AdminControl.invoke(apc,'showFailedServerStartOperations')
When the server becomes available, the failed to start flag is removed. You can use the following wsadmin tool command to list the servers that have the failed to start flag enabled:
wsadmin>print AdminControl.invoke(apc,'showFailedServerStartOperations')
OpsManTestCell/xdblade09b09/DC1_xdblade09b09
More servers than expected can start when network or communication issues prevent the application placement controller from receiving confirmation that a server started. When the application placement controller does not receive confirmation, it might start an additional server.
This behavior can occur when the application placement controller is running on more than one server. The scenario often arises in mixed topologies, where a WebSphere® Application Server Version 8.5 cell also contains a WebSphere Virtual Enterprise Version 6.1.x node. Because WebSphere Application Server Version 8.5 and WebSphere Virtual Enterprise Version 6.1.x nodes use different high availability solutions by default, an application placement controller runs on both the Version 8.5 node and the Version 6.1.x node. To correct the issue, run the useBBSON.py script on the deployment manager, and restart the cell. The script sets cell custom properties that ensure that the same high availability solution is used throughout the cell and that only one application placement controller is started.
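The exact location and invocation of useBBSON.py depend on your installation. Assuming that the script is available from the bin directory of the deployment manager profile, an invocation might look like the following example:
./wsadmin.sh -lang jython -f useBBSON.py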
You can check the actions of the application placement controller with runtime tasks, which you can view in the administrative console. The list of runtime tasks includes tasks that the application placement controller is completing, and confirmation that changes were made. Each runtime task has a status of succeeded, failed, or unknown. An unknown status means that there was no confirmation either way about whether the task was successful.
For more information about how the application placement controller works with VMware and other hardware virtualization environments, read about virtualization and Intelligent Management and supported server virtualization environments.
If you start or stop a server while the dynamic cluster is in automatic mode, the application placement controller might override your actions. To avoid interfering with the application placement controller, put the dynamic cluster into manual mode before you start or stop a server.
The membership policy for a dynamic cluster defines the eligible nodes on which servers can start. From this set of nodes, the application placement controller selects a node on which to start a server by considering system constraints such as available processor and memory capacity. The application placement controller does not determine the server placement based on operating systems.
The application placement controller works with the autonomic request flow manager (ARFM) and defined service policies to determine when to start servers. Service policies set the performance and priority targets for applications and guide the autonomic controllers in traffic shaping and capacity provisioning decisions. Service policy goals indirectly influence the actions that are taken by the application placement controller. The application placement controller provisions more servers based on information from the ARFM about how much capacity is required for the number of concurrent requests that are being serviced by the ARFM queues. This amount is determined by how much capacity each request uses when it is serviced and by how many concurrent requests ARFM determines are appropriate. The number of concurrent requests is based on application priority, service policy goals, and other factors.
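The following Python fragment is a purely illustrative sketch of that reasoning; the per-request cost, the concurrency level, and the per-server capacity are hypothetical values, not figures that ARFM exposes:
import math

concurrent_requests = 120   # concurrency that ARFM decides is appropriate
cost_per_request = 0.02     # processor capacity that each in-flight request consumes
capacity_per_server = 0.8   # usable processor capacity of one server instance

required_capacity = concurrent_requests * cost_per_request           # 2.4 processors
servers_needed = math.ceil(required_capacity / capacity_per_server)  # 3 instances
print(required_capacity)
print(servers_needed)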
The performance goals that are defined by service policies are not guarantees. Intelligent Management cannot make your application respond faster than its limit. In addition, more capacity is not provisioned if enough capacity is already provisioned to meet the demand, even if the service policy goal is being breached. Intelligent Management can prevent unrealistic service policy goals from introducing instability into the environment.
You can change the heap size of the server in the dynamic cluster template. For more information, read about modifying the JVM heap size.
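Locating the dynamic cluster server template is specific to your configuration, but after you have the configuration ID of the template, or of a server whose settings you want to change, a wsadmin (Jython) sketch for adjusting the heap sizes might look like the following example. The containment path and the heap values are placeholders for illustration only:
serverId = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:myServer/')   # placeholder path
jvm = AdminConfig.list('JavaVirtualMachine', serverId)
AdminConfig.modify(jvm, [['initialHeapSize', '512'], ['maximumHeapSize', '1024']])
AdminConfig.save()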
You must save dynamic clusters to the master repository before you change the server template. If dynamic cluster members do not inherit the properties from the template, the server template was probably changed in an unsaved workspace. To fix this issue, delete the dynamic cluster, and then re-create it.
Save your changes to the master repository. After you click Finish, click Save in the message window to ensure that your changes are saved to the master repository. Click Save again in the Save to master configuration window, and click Synchronize changes with nodes.
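If you script these changes instead, the equivalent wsadmin (Jython) steps are to save the configuration and synchronize the active nodes. This sketch assumes that the Jython script library, which provides AdminNodeManagement, is available in your environment:
wsadmin>AdminConfig.save()
wsadmin>AdminNodeManagement.syncActiveNodes()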
Dynamic application placement is based on load distribution, service policy, and available resources. When you reduce the maximum number of application instances in a dynamic cluster, the application placement controller stops servers on the nodes with the highest workload until the number of servers is reduced to the configured maximum value. If all of the nodes are available, the application placement controller selects the first node in the list and continues with the next node in the list until the maximum number is met.
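The following Python fragment is a purely illustrative sketch of that stopping order, not the product algorithm; the server names, workload figures, and maximum value are hypothetical:
servers = [
    {'name': 'DC1_node1', 'node_workload': 0.85},
    {'name': 'DC1_node2', 'node_workload': 0.40},
    {'name': 'DC1_node3', 'node_workload': 0.65},
]
maximum_instances = 2

# Consider the busiest nodes first until the cluster is back at its maximum.
for server in sorted(servers, key=lambda s: s['node_workload'], reverse=True):
    if len(servers) <= maximum_instances:
        break
    print('stop ' + server['name'])
    servers = [s for s in servers if s['name'] != server['name']]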