Setting up useful resource groups is essential to making full and efficient use of cluster capabilities. Following the steps below, you create resource groups that set aside specific hosts for management duties and divide the remainder of your hosts based on maximum memory.
Resource groups are logical groups of hosts. They provide a simple way of organizing and grouping resources (hosts) for convenience; instead of creating policies for individual resources, you can create and apply them to an entire group. Groups can be made up of resources that satisfy a specific static requirement (OS, memory, swap space, CPU factor, and so on) or that are explicitly listed by name.
The cluster administrator can define multiple resource groups, assign them to consumers, and configure a distinct resource plan for each group. For example:
Define multiple resource groups: A major benefit of defining resource groups is the flexibility to group your resources based on attributes that you specify. For example, if you run workload units or use applications that require a Linux OS with at least 1000 MB of maximum memory, you can create a resource group that only includes resources meeting those requirements (see the example selection string below).
Configure a resource plan based on individual resource groups: Tailoring the resource plan for each resource group requires you to complete several steps. These include adding the resource group to each desired top-level consumer (thereby making the resource group available for other sub-consumers within the branch), along with configuring ownership, enabling lending/borrowing, specifying share limits and share ratio, and assigning a consumer rank within the resource plan.
Resource groups are either specified by host name or by resource requirement using the select string.
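For example, the Linux scenario described above might be captured with a selection string along the following lines. This is a sketch: the maxmem resource appears elsewhere in this section, but the OS type value LINUX86 is illustrative only, and the actual resource names and values depend on how your cluster is configured.

select(type == LINUX86 && maxmem >= 1000)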
By default, EGO comes configured with three resource groups: InternalResourceGroup, ManagementHosts, and ComputeHosts. InternalResourceGroup and ManagementHosts should be left untouched, but ComputeHosts can be kept, modified, or deleted as required.
You need to know which hosts you have reserved as management hosts. You identified these hosts as part of the installation and configuration process. If you want to select different management hosts than the ones you originally chose, you must uninstall and then reinstall EGO on the compute hosts that you now want to designate as management hosts (a master host requires installing the full package), and then run egoconfig mghost. The mg tag is assigned to the new management host to differentiate it from a compute host. The hosts you identify as management hosts are subsequently added to the ManagementHosts resource group.
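For example, after installing the full package on the host, you would run the following command on that host. This is a sketch: depending on your version and installation, egoconfig mghost may require additional arguments (such as the cluster's shared configuration directory), so check the egoconfig usage information for your environment.

egoconfig mghost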
Management hosts run the essential services that control and maintain your cluster; you therefore need powerful, stable computers that you can dedicate to management duties. Note that management hosts are expected to run only services, not to execute workload units.
Ensure that you designate one of your management hosts as the master host, and one or two other hosts as master failover candidates (the number of failover candidates is up to you, and may depend on the size of your production cluster).
The ManagementHosts resource group is created during the installation and configuration process. Each time you install and configure the full package on a host, that host is statically added to the ManagementHosts resource group.
Ensure that the trusted hosts you identified in the section Gather the facts (above) are the same hosts that were configured as management hosts.
You must be logged on to the Platform Management Console as a cluster administrator. You should not be running any workload units while you perform this task; when you delete a resource group, those hosts are no longer assigned to a consumer. You should complete this task before changing your resource plan for the first time. If you have modified the resource plan and want to save those changes, export the resource plan before starting this task.
You can create resource groups that automatically divide all your compute hosts between two (or more) groups. Splitting your hosts this way is useful if some of the applications or workload units you plan to run on the EGO cluster have specific memory requirements.
You can logically group hosts into resource groups based on any criteria that are important to the applications and workload units you intend to run. For example, you may wish to distinguish hosts based on OS type or CPU factor.
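For example, two compute resource groups split on maximum memory could use complementary selection strings such as the following. The 1000 MB threshold and the maxmem_high group follow the example used later in this section; maxmem_low is an illustrative name, and !mg excludes management hosts from both groups.

maxmem_high: select(!mg && maxmem > 1000)
maxmem_low: select(!mg && maxmem <= 1000)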
Now that you have basic resource groups (one for your management hosts and two for your compute hosts), you can begin to specialize by further splitting a resource group based on available memory.
For example, if you know that an application you run requires machines with more than 1000 MB of maximum memory and two or more CPUs, you can create a new resource group (and then modify the existing “maxmem_high” resource group) to make these specific resources available to any consumer. The new resource group “maxmemhighmultiCPU” would have the selection string:
select(!mg && maxmem > 1000 && ncpus >= 2)
You would then modify the existing resource group “maxmem_high” to read:
select(!mg && !(ncpus >= 2) && maxmem > 1000)
As a result, the maxmem_high group contains only single-CPU hosts with more than 1000 MB of maximum memory.