Installation Summary

This is a summary of the steps to take when creating a small production cluster.

This book describes how to build a production-ready cluster. A specific example is provided as a guide. If you understand this example, you can adjust the procedures as required at your site to build your own cluster.

Sample production cluster data

Read this example to understand how to create a small cluster that can grow into a regular production cluster with master host failover.

For simplicity, our cluster uses the following default settings:
  • The cluster administrator is egoadmin

  • The cluster name is cluster1

  • The installation directory is /opt/ego

  • The communication ports are 7869-7873

  • The web server ports are 8080, 8005, and 8009

  • The service director (sddnsserv) port is 53

  • The web service gateway (wsg) port is 9090

Our cluster has the following custom characteristics:

  • All management hosts have access to a shared file system

  • All hosts run Linux

  • All hosts are binary compatible

  • Master host failover is enabled

Our cluster has the following hosts:

  • HostM, the master host

  • HostF, the file server host for the shared file system

  • HostC, a compute host

  • HostD, a management host that is a master candidate host

Additionally, our cluster uses a database host that is not part of the cluster.

We will set up the file server and the master host, and verify that the necessary services have started. We will then add a compute host to the cluster, followed by a master candidate host with failover enabled. Once these important steps are done, we can expand the cluster by adding more management or compute hosts.

Sample production cluster installation

  1. Plan and prepare your cluster (see Plan Your Cluster).

    1. Create the cluster administrator account (egoadmin)

    2. On HostF (the file server host), prepare the shared configuration directory.

      For example, /share/ego.

    3. On HostM and HostD (the management hosts), free the web server ports, 8080, 8005, and 8009; the service director port, 53; and the web service gateway port, 9090.

    4. On all hosts in the cluster (HostM, HostC, HostD):

      1. Free the communication ports.

        For example, free ports 7869-7873.

      2. Make sure the installation directory is available.

        For example, /opt/ego is empty or does not exist.

    5. On the database host, create the database schema.
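    The port and directory checks in the planning steps above can be sketched as a small script to run on each host. This is a minimal sketch: the helper names port_free and dir_available are illustrative, not product commands, and the bash /dev/tcp probe can be replaced with netstat or ss at your site.

```shell
#!/bin/bash
# Pre-installation checks, run on each host before installing.
# port_free and dir_available are illustrative helpers, not product commands.

port_free() {
    # Succeeds if nothing is listening on the given TCP port on this host.
    ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

dir_available() {
    # Succeeds if the directory does not exist, or exists and is empty.
    [ ! -e "$1" ] || { [ -d "$1" ] && [ -z "$(ls -A "$1" 2>/dev/null)" ]; }
}

# Communication ports (all hosts), plus web server, service director,
# and web service gateway ports (management hosts only).
for port in 7869 7870 7871 7872 7873 8080 8005 8009 53 9090; do
    port_free "$port" && echo "port $port is free" || echo "port $port is IN USE"
done

# The installation directory must be empty or absent.
dir_available /opt/ego && echo "/opt/ego is available" \
                       || echo "/opt/ego is NOT available"
```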

  2. Install the master host (see Install the Master Host).

    The following is a summary of what to do on HostM (the master host):

    1. Run the RPM package as root, taking defaults.

    2. Configure automatic startup (optional).

    3. Grant root privileges to egoadmin (optional).

    4. Log on as egoadmin.

    5. Configure the master host to join the cluster.

    6. License the cluster.

    7. Define the master host as a management host.

    8. Install the database driver.

    9. Configure the database connection.

    10. Start and test the master host.

    11. Test the web server.

    12. Check the reporting services.
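    As a sketch only, the HostM steps above might look like the following, assuming IBM Platform EGO-style commands (egoconfig, egosh) and the defaults in this example. The RPM file name, license file path, and exact subcommand names are placeholders that vary by product version; verify each command against the command reference for your release.

```
# On HostM, as root (file names are placeholders):
rpm -ivh ego-linux-x86_64.rpm      # step 1: install, taking defaults
egosetrc.sh                        # step 2 (optional): start at boot
egosetsudoers.sh                   # step 3 (optional): root privileges for egoadmin

# Log on as egoadmin and set the EGO environment:
su - egoadmin
. /opt/ego/profile.platform

egoconfig join HostM               # step 5: configure the master host
egoconfig setlicense /tmp/license  # step 6: license the cluster (placeholder file)
egoconfig mghost /share/ego        # step 7: define HostM as a management host

# After installing the database driver and configuring the connection (steps 8-9):
egosh ego start                    # step 10: start the host
egosh service list                 # confirm that services have started
```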

  3. Install a compute host (see Install a Compute Host).

    The following is a summary of what to do on HostC (a compute host):

    1. Run the RPM package as root, taking defaults.

    2. Configure automatic startup (optional).

    3. Grant root privileges to egoadmin (optional).

    4. Log on as egoadmin.

    5. Configure the compute host to join the cluster.

    6. Start the compute host.

      The cluster should be able to run work.
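    A compute host follows the same pattern but does not use the shared directory. The sketch below again assumes EGO-style commands; treat the RPM name and command syntax as placeholders to confirm against your command reference.

```
# On HostC, as root (file names are placeholders):
rpm -ivh ego-linux-x86_64.rpm   # step 1: install, taking defaults
egosetrc.sh                     # step 2 (optional): automatic startup
egosetsudoers.sh                # step 3 (optional): root privileges for egoadmin

su - egoadmin
. /opt/ego/profile.platform
egoconfig join HostM            # step 5: join the cluster by naming the master
egosh ego start                 # step 6: start the compute host
egosh resource list             # HostC should now appear in the cluster
```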

  4. Install a management host that is a master candidate host (see Install a Management Host and Enable master host failover).

    The following is a summary of what to do on HostD (a master candidate host):

    1. Run the RPM package as root, taking defaults.

    2. Configure automatic startup (optional).

    3. Grant root privileges to egoadmin (optional).

    4. Log on as egoadmin.

    5. Configure the master candidate host to join the cluster.

    6. Configure the master candidate host as a management host.

      HostD is now a management host, but is not a master candidate host yet.

    7. Start and test the master candidate host.

    8. Use the web server to configure the master list for failover.

      HostD is now a master candidate host. If HostM fails, HostD should take over as master.
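    The HostD steps can be sketched the same way. As before, this assumes EGO-style commands with placeholder file names; the final failover step is done in the web server console, not on the command line.

```
# On HostD, as root (file names are placeholders):
rpm -ivh ego-linux-x86_64.rpm   # step 1: install, taking defaults
egosetrc.sh                     # step 2 (optional): automatic startup
egosetsudoers.sh                # step 3 (optional): root privileges for egoadmin

su - egoadmin
. /opt/ego/profile.platform
egoconfig join HostM            # step 5: join the cluster
egoconfig mghost /share/ego     # step 6: make HostD a management host
egosh ego start                 # step 7: start and test the host

# Step 8: in the web server console, add HostD to the master candidate list,
# then confirm the master and candidate hosts, for example with: egosh ego info
```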