WebSphere® Extended Deployment is built
on features of autonomic computing in the dynamic operations environment.
With these features, the application server environment can expand
and contract as business demand changes. Using the autonomic managers
in the product environment, dynamic operations can make decisions
that are based on business goals.
The dynamic operations environment has the following components:
- Operational policy
- An operational policy is a business or performance objective
that supports specific goals for specific requests. Operational policies
include service policies and health policies. A service policy defines a business
goal and an importance level, and contains one or more transaction classes.
For a given work class, a rule condition maps to a transaction class
that belongs to a service policy. The service policy contains the
business goal requirements, and the work class contains the description of
the work to which the service policy applies. The dynamic operations
environment reads the combination of these policies to make decisions
about HTTP, SOAP, JMS, and IIOP work requests. A sketch of this
classification mapping follows the component list.
- Node groups
- Within WebSphere Extended Deployment, the
relationship between applications and the nodes on which they run
is expressed in terms of an intermediate construct called a node
group. A node group is a pool of computing power within which
one or more dynamic clusters are created. The computing power represented
by a node group is divided among its dynamic cluster members. This
distribution of resources is modified autonomically in accordance
with business goals to compensate for changing workload patterns.
- Dynamic clusters
- A dynamic cluster is an application deployment target that can expand
and contract as the dynamic operations environment requires. Dynamic cluster instances are created on nodes
that are members of the node group that is associated with the dynamic
cluster.
- Autonomic request flow manager
- The autonomic request flow manager (ARFM) performs the following functions:
- Queues incoming messages in edge-based gateways to provide
overload protection for computing power and differentiated service. The
computing resource that is protected from overload is typically CPU power.
Differentiated service aims to provide the best balanced performance
results among the various flows of traffic, relative to the given operational
policy and the current offered load. A simplified sketch of this queuing
appears after the component list.
- Can optionally apply dialog- or session-oriented admission control
to protect computing power from overload.
- Sends information to the placement controller about each cluster
to enable the placement controller to optimize the placement for the
operational policy and currently offered load. The information about
a given cluster is the relationship between computing power and service
utility for that cluster.
- On demand router
- On demand routers (ODRs) are intelligent HTTP proxies. ODRs are the point of entry into a product environment
and are the gateways through which HTTP requests flow to back-end application servers. An ODR can momentarily
queue requests for less important applications so that requests
from more important applications are handled more quickly, or to
protect back-end application servers from being overloaded. The ODR
is aware of the current location of a dynamic cluster's server instances,
so that requests can be routed to the correct endpoint. The ODR can
also dynamically adjust the amount of traffic that is sent to each individual
server instance based on processor utilization and response times. Note
that for an inbound UDP message, the ODR might route the message to another ODR
to properly check for and handle UDP retransmission.
- Dynamic workload manager
- The autonomic request flow manager classifies and prioritizes
requests to application servers based on demand and the operational policies.
The dynamic workload manager then distributes the requests among the
nodes in a node group to balance the work. A simple traffic-weighting
sketch appears after the component list.
- Application placement controller
- The application placement controller is an autonomic manager in
the dynamic operations infrastructure that supports the fluid mobility
of applications within a dynamic cluster. The application placement
controller starts additional application server instances when the workload
is more than the running instances can handle, and stops application
server instances when there are too few requests for the number of
instances that are running. A sketch of this start-and-stop decision
follows the component list.
- Health controller
- The health controller continuously monitors the defined health policies.
When a condition that is specified by a health policy is not met in the environment,
the health controller ensures that the configured actions are taken
to correct the problem. A sketch of this monitor-and-react cycle follows
the component list.
- EWLM
- The enterprise workload manager (EWLM) manages sub-goals and resource
allocations for the larger environment that contains WebSphere Extended Deployment.
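
The work class, transaction class, and service policy relationship that is described under Operational policy can be pictured with a small sketch. The following Java example is illustrative only: the types and methods (WorkClass, TransactionClass, ServicePolicy, classify) are hypothetical and are not WebSphere Extended Deployment APIs. It shows how a rule condition on a work class selects a transaction class that belongs to a service policy carrying the business goal and its importance.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Illustrative model only; these types are hypothetical and are not
// WebSphere Extended Deployment APIs.
public class ClassificationSketch {

    // A service policy carries the business goal and its importance.
    record ServicePolicy(String name, String goal, int importance) {}

    // A transaction class belongs to exactly one service policy.
    record TransactionClass(String name, ServicePolicy policy) {}

    // A work class describes the work (here, by URI prefix) and maps
    // rule conditions to transaction classes.
    static class WorkClass {
        private final Map<Predicate<String>, TransactionClass> rules = new LinkedHashMap<>();
        private final TransactionClass defaultClass;

        WorkClass(TransactionClass defaultClass) { this.defaultClass = defaultClass; }

        void addRule(Predicate<String> condition, TransactionClass tc) {
            rules.put(condition, tc);
        }

        // The first matching rule condition selects the transaction class.
        TransactionClass classify(String requestUri) {
            return rules.entrySet().stream()
                    .filter(e -> e.getKey().test(requestUri))
                    .map(Map.Entry::getValue)
                    .findFirst()
                    .orElse(defaultClass);
        }
    }

    public static void main(String[] args) {
        ServicePolicy gold = new ServicePolicy("Gold", "avg response time < 1s", 10);
        ServicePolicy bronze = new ServicePolicy("Bronze", "best effort", 1);

        TransactionClass checkout = new TransactionClass("CheckoutTC", gold);
        TransactionClass browse = new TransactionClass("BrowseTC", bronze);

        WorkClass storeWorkClass = new WorkClass(browse);
        storeWorkClass.addRule(uri -> uri.startsWith("/store/checkout"), checkout);

        TransactionClass tc = storeWorkClass.classify("/store/checkout/pay");
        System.out.printf("Request mapped to %s under service policy %s (importance %d)%n",
                tc.name(), tc.policy().name(), tc.policy().importance());
    }
}
```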
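
The queuing behavior that is described under Autonomic request flow manager can be pictured as an importance-ordered admission queue: while the protected resource is saturated, queued work drains in order of service policy importance. The following is a minimal, hypothetical sketch of that idea; it does not reflect the ARFM's actual algorithms or interfaces.

```java
import java.util.concurrent.PriorityBlockingQueue;

// Minimal sketch of importance-based queuing in the spirit of the ARFM
// description above: when the protected resource (typically CPU) is busy,
// queued requests are released in order of importance. This is a
// hypothetical illustration, not the ARFM algorithm or API.
public class AdmissionQueueSketch {

    // A queued request tagged with the importance of its service policy.
    record QueuedRequest(String id, int importance) implements Comparable<QueuedRequest> {
        @Override
        public int compareTo(QueuedRequest other) {
            // Higher importance drains from the queue first.
            return Integer.compare(other.importance, this.importance);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<QueuedRequest> queue = new PriorityBlockingQueue<>();

        // Load is queued while the back end is saturated.
        queue.add(new QueuedRequest("browse-1", 1));
        queue.add(new QueuedRequest("checkout-1", 10));
        queue.add(new QueuedRequest("browse-2", 1));

        // As capacity frees up, the most important work is admitted first,
        // which is how differentiated service is provided under overload.
        while (!queue.isEmpty()) {
            QueuedRequest next = queue.take();
            System.out.println("Admitted " + next.id() + " (importance " + next.importance() + ")");
        }
    }
}
```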
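
The traffic adjustment that is described under On demand router and Dynamic workload manager can be sketched with a simple weighting scheme in which server instances with lower observed response times receive a larger share of traffic. The weighting below (weight = 1 / response time) is an assumption for illustration and is not the product's routing algorithm.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of response-time-based traffic weighting, in the
// spirit of the ODR and dynamic workload manager descriptions above.
public class WeightedRoutingSketch {

    // Returns each server's share of traffic, weighting servers with lower
    // observed response times more heavily (weight = 1 / responseTime).
    static Map<String, Double> trafficShares(Map<String, Double> avgResponseTimesMillis) {
        Map<String, Double> weights = new LinkedHashMap<>();
        double total = 0.0;
        for (Map.Entry<String, Double> e : avgResponseTimesMillis.entrySet()) {
            double w = 1.0 / e.getValue();
            weights.put(e.getKey(), w);
            total += w;
        }
        final double sum = total;
        weights.replaceAll((server, w) -> w / sum);  // normalize to fractions of traffic
        return weights;
    }

    public static void main(String[] args) {
        Map<String, Double> observed = new LinkedHashMap<>();
        observed.put("cluster1_node1", 50.0);   // fast instance
        observed.put("cluster1_node2", 100.0);
        observed.put("cluster1_node3", 200.0);  // slow or busy instance

        trafficShares(observed).forEach((server, share) ->
                System.out.printf("%s receives %.0f%% of traffic%n", server, share * 100));
    }
}
```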
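
The start-and-stop decision that is described under Application placement controller can be sketched as a basic control loop. The capacity figure and instance limits below are invented for illustration; the actual placement controller solves a placement optimization across the whole node group, which this sketch does not attempt.

```java
// Hypothetical sketch of the start/stop decision implied by the application
// placement controller description above. Capacity numbers are assumptions.
public class PlacementSketch {

    static final double REQUESTS_PER_INSTANCE = 100.0;  // assumed capacity of one instance

    // Decide how many dynamic cluster instances should run for the current
    // offered load, within the allowed minimum and maximum.
    static int desiredInstances(double offeredRequestsPerSecond, int minInstances, int maxInstances) {
        int needed = (int) Math.ceil(offeredRequestsPerSecond / REQUESTS_PER_INSTANCE);
        return Math.max(minInstances, Math.min(maxInstances, needed));
    }

    public static void main(String[] args) {
        int running = 2;
        double[] loadSamples = {150.0, 420.0, 60.0};  // requests per second over time

        for (double load : loadSamples) {
            int desired = desiredInstances(load, 1, 5);
            if (desired > running) {
                System.out.printf("Load %.0f req/s: start %d instance(s)%n", load, desired - running);
            } else if (desired < running) {
                System.out.printf("Load %.0f req/s: stop %d instance(s)%n", load, running - desired);
            } else {
                System.out.printf("Load %.0f req/s: no change%n", load);
            }
            running = desired;
        }
    }
}
```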
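
The monitor-and-react cycle that is described under Health controller can be sketched as checking each health policy condition and running its configured action when the condition is violated. The policy and action shapes below are hypothetical; the product defines its own health policy types and corrective actions.

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of the health controller's monitor-and-react cycle.
// The condition and corrective action here are invented for illustration.
public class HealthControllerSketch {

    // A health policy pairs a condition to watch with an action to take
    // when that condition is violated.
    record HealthPolicy(String name, Supplier<Boolean> conditionHolds, Runnable correctiveAction) {}

    static void monitor(List<HealthPolicy> policies) {
        for (HealthPolicy policy : policies) {
            if (!policy.conditionHolds().get()) {
                System.out.println("Health policy '" + policy.name() + "' violated; taking action");
                policy.correctiveAction().run();
            }
        }
    }

    public static void main(String[] args) {
        double heapUsedFraction = 0.95;  // pretend measurement from a server instance

        HealthPolicy memoryPolicy = new HealthPolicy(
                "heap usage below 90%",
                () -> heapUsedFraction < 0.90,
                () -> System.out.println("Restarting the offending server instance"));

        monitor(List.of(memoryPolicy));  // one pass of the monitoring cycle
    }
}
```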