Administration Guide


High Speed Interconnection Using VI

Virtual Interface (VI) Architecture is the inter-node communication protocol alternative to TCP/IP in a Windows NT massively parallel processing (MPP) environment. VI is a new communication architecture that was developed jointly by Intel, Microsoft, and Compaq to improve performance over a System Area Network (SAN). Refer to http://www.viarch.org for more information on the architecture.

Products that provide a VIA-enabled network interface card (NIC), switch, and software driver implementation may be acquired separately from DB2 Universal Database. Several Independent Hardware Vendors (IHVs) have released, or plan to release, such products.

VI Architecture offers lower latency, higher bandwidth, and lower CPU consumption than TCP/IP. In a communication-intensive environment, using VI Architecture improves the overall system throughput. The greater the number of nodes in the cluster, and the greater the amount of data transferred, the greater the benefit from using VI Architecture.

DB2 Universal Database supports VI Architecture implementations that comply with the Virtual Interface Architecture Specification, Version 1.0, and the Intel Virtual Interface (VI) Architecture Developer's Guide, Version 1.0, and that pass the "Virtual Interface Architecture Conformance Suite". The specification is available at http://www.intel.com/design/servers/vi/the_spec/specification.htm on the Web. The Developer's Guide, along with information on the conformance suite, is available at http://www.intel.com/design/servers/vi/developer/ia_imp_guide.htm on the Web.

IBM announced support for Virtual Interface (VI) Architecture with DB2 Universal Database EEE V5.2.

To find out about other products that adhere to VI Architecture and are supported by DB2 Universal Database EEE, please contact the DB2 Universal Database support organization at http://www.software.ibm.com/data or call 1-800-237-5511 (U.S.A. and Canada only).

The products that have been tested with DB2 Universal Database include:

There may be other products that work with DB2 Universal Database. Check with the vendor of the product, and with IBM Service and Support, to ensure that the product is supported.

Virtual Interface (VI) Hardware Setup

Examples of the prerequisites for the network hardware setup using VI are:

You must configure DB2 to use VI. See Enabling DB2 to Run Using VI for the information you need to do this.

Setup Procedure for GigaNet Interconnect

The hardware and software required to set up this environment include the following products:

The steps required to ensure that the GigaNet Interconnect works with DB2 Universal Database are shown below. Each step is only a summary of what is required; not all of the details are presented here. Use the documentation referenced at each step for the detailed instructions and direction needed.

Each GigaNet GNN1000 is packaged with a GigaNet cLAN Software CD-ROM. The CD-ROM contains all of the software necessary to set up the GigaNet Interconnect. In addition, the CD-ROM contains the VI Architecture SDK and Adobe Acrobat Reader. These two items are needed only by those who are developing VI-enabled applications.

Summary of steps:

  1. Install Adapter Cards
  2. Install Switches and Cables
  3. Install Adapter Drivers
  4. Install cLAN Management Console
  5. Test the Interconnect

Here are the steps:

  1. Install the GigaNet GNN1000 Network Interface Card. Please refer to the GigaNet GNN1000 User Guide for installation instructions.
  2. Install the GigaNet GNX5000 Switch and Cables. Please refer to the GigaNet GNX5000 User Guide for installation instructions.
  3. Install the GigaNet GNN1000 Adapter Driver software on each node connected to the GNX5000 Switch. Please refer to the GigaNet GNN1000 User Guide for installation instructions. Here are additional details if you are installing drivers provided by GigaNet:
    1. Remove any previous version of the GNN1000 Driver already installed. Removal requires the node to be rebooted.
    2. Use Start-->Settings-->Control Panel-->Networks-->Adapters-->Add to install the driver.
    3. Click Have Disk... and specify the Driver directory on the CD-ROM. For example, if F: is your CD-ROM drive, use F:\Driver.
    4. Select "GNN1000 NDIS Adapter" and then click OK.
    5. Configure Network protocols to complete the installation.

    GigaNet Adapter Driver software is also available on GigaNet's web site, http://www.giganet.com. Please refer to the download and installation instructions found on the support page of GigaNet's web site.

    The installation of the GNN1000 Adapter Driver causes the node to reboot.

  4. The GigaNet cLAN Management Console (GMC) can be used to test the integrity of the GigaNet Interconnect. The GigaNet cLAN Management Console consists of two parts: the Console and the Agent. The Agent must be installed on all nodes in the cluster. The Console can be installed on any network node that has access to the nodes in the cluster. The recommended and most versatile installation has both the Console and the Agent installed on each node in the cluster.

    Install the GigaNet cLAN Management Console. Please refer to the GigaNet GNN1000 User Guide for installation instructions and additional information about the cLAN Management Console. Here are additional details on the installation procedure:

    1. Insert the cLAN Software CD into the CD-ROM drive.
    2. Wait for the CD automatic installation menu to appear.
    3. Click on "Install cLAN Management Console."
    4. Repeat this installation procedure on each remaining node in the cluster.

    GigaNet cLAN Management Console software is also available on GigaNet's web site, http://www.giganet.com. Please refer to the download and installation instructions found on the support page of GigaNet's web site.

    The installation of the cLAN Management Console may cause the node to reboot.

  5. Test that the GigaNet hardware is working, as follows:
    1. Open the GMC. (Programs-->GigaNet-->cLAN Management Console)
    2. A dialog box is displayed showing all accessible machines in the LAN. Press ESC.
    3. Select Console-->Local from the menu bar.
    4. Confirm that all the members in the cluster are shown and that they are all "Active".
    5. Select Utilities-->VI Throughput from the menu bar. This will run a throughput test to check that the data is actually going through the hardware.
    6. Enter in uppercase letters the computer names of the two nodes you wish to use in the test. Identify the local node as the source node.
    7. Click Start Measuring. You should see data being transferred at a rate of at least 65 MB per second.
    8. Click Stop Measuring to stop the connection test.
    9. Repeat the test for the other nodes in the cluster by measuring throughput between the local node (Source) and the other nodes (Sink).

    If the connection test does not appear to be working, refer to the troubleshooting sections of the GigaNet GNN1000 User Guide and the GigaNet GNX5000 User Guide.

Refer to DB2 Enterprise - Extended Edition for Windows Quick Beginnings for information on how to install and implement DB2 Universal Database for Windows NT.

Setup Procedure for ServerNet Interconnect

The hardware and software required to set up this environment include the following products:

The following are the steps required to ensure that the ServerNet Interconnect works with DB2 Universal Database. Each step is only a summary of what is required; not all of the details are presented here. Use the documentation referenced at each step for the detailed instructions and direction needed.

The steps shown below also assume that you are using no more than six (6) nodes in the cluster. Contact ServerNet if you have a requirement to use more than six nodes.

Here are the steps:

  1. Install the ServerNet Network Interface Card. Please refer to the ServerNet-I Virtual Interface Software Release Document, (product ID N0031) for installation instructions.
  2. Install the ServerNet Switch 1. Please refer to the ServerNet-I Virtual Interface Software Release Document, (product ID N0031) for installation instructions.
  3. Uninstall previous ServerNet drivers. (Skip this step if this is your first time installing ServerNet.)
    1. Open the Network control panel. (Start-->Settings-->Control Panel-->Network)
    2. Click on the Adapters Tab.
    3. Remove Tandem ServerNet PCI Adapter Driver.
    4. Click on the Services Tab.
    5. Remove SANMan.
    6. Click on the Protocols Tab.
    7. Remove Tandem ServerNet-I VI Protocol.
  4. Install the Tandem ServerNet PCI Adapter Driver. Here are additional details if you are installing using the software CD provided by ServerNet:
    1. Open the Network control panel. (Start-->Settings-->Control Panel-->Network)
    2. Click on the Adapters Tab. (The Adapters screen appears.)
    3. Ensure the new ServerNet driver is placed in its own drive and/or directory. Then, from a command prompt in that drive and/or directory, type "ernnn.exe -d" to start the self-extracting program. ("ernnn.exe" stands for the name of the Engineering Release file, ERnnn.EXE, where the number identifies the specific version of the ServerNet driver to be installed.) An example command sequence is shown at the end of this step.
    4. Change to the drive and/or directory where the extracted files are located, and then change to the "Spad n.n.n\Free" subdirectory (where "n.n.n" is the specific version of the product). If you are working in a troubleshooting or development environment, change to the "Spad n.n.n\Checked" subdirectory instead.
    5. Rename the "oemsetup.multi_node" file to "oemsetup.inf".
    6. Choose Add in the Adapters Tab. (The Select Adapters screen appears.)
    7. Click Have Disk.... (The Insert Disk screen appears.)
    8. Enter the drive and/or directory where the oemsetup.inf file is located.
    9. Ensure the dialog box shows "Tandem ServerNet PCI Adapter Driver" and then click OK. Ensure the list of adapters now shows the ServerNet adapter. Click Close.
    10. Choose Yes to restart the computer, or select No and continue with the installation of SANMan and the Virtual Interface Protocol.
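
    The following is a minimal sketch, at a Windows NT command prompt, of the copy, extraction, and rename described above; the same extraction procedure is repeated when installing SANMan and the Virtual Interface Protocol in the next two steps. The release file name ER123.EXE, the CD-ROM drive D:, the directory C:\snetdrv, and the version number 1.2.0 are all assumptions; substitute the names from your own Engineering Release:

      rem Copy the self-extracting Engineering Release to its own directory
      mkdir C:\snetdrv
      copy D:\ER123.EXE C:\snetdrv
      rem Run the self-extracting program, as described above
      C:
      cd \snetdrv
      er123.exe -d
      rem Move to the extracted driver files and rename the setup file
      cd "Spad 1.2.0\Free"
      ren oemsetup.multi_node oemsetup.inf
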
  5. Install SANMan. Here are additional details if you are installing using the software CD provided by ServerNet:
    1. Open the Network control panel. (Start-->Settings-->Control Panel-->Network)
    2. Click on the Services Tab. (The Services screen appears.)
    3. Ensure the new ServerNet driver is placed in its own drive and/or directory. Then, from a command prompt in that drive and/or directory, type "ernnn.exe -d" to start the self-extracting program, as in the example shown in step 4.
    4. Choose Add in the Services Tab. (The Select Services screen appears.)
    5. Change to the drive and/or directory where the extracted files are located, and then change to the "SANMan n.n.n\Free" subdirectory (where "n.n.n" is the specific version of the product). If you are working in a troubleshooting or development environment, change to the "SANMan n.n.n\Checked" subdirectory instead.
    6. Determine whether the Switch is X or Y by looking at the lights on the Switch: one light says "X", and the other says "Y".
    7. If it is an X Switch, select X=1 and Y=0. Ensure all cables are connected to the X port on the network cards.
    8. If it is a Y Switch, select X=0 and Y=1. Ensure all cables are connected to the Y port on the network cards.
    9. Provide the port number of the switch to which the network card on the current machine is connected.
    10. Select "PC" for all six (6) ports.
  6. Install the Virtual Interface Protocol. Here are additional details if you are installing using the software CD provided by ServerNet:
    1. Open the Network control panel. (Start-->Settings-->Control Panel-->Network)
    2. Click on the Protocols Tab. (The Network Protocols screen appears.)
    3. Ensure the new ServerNet driver is placed in its own drive and/or directory. Then, from a command prompt in that drive and/or directory, type "ernnn.exe -d" to start the self-extracting program, as in the example shown in step 4.
    4. Choose Add in the Protocols Tab. (The Select Network Protocols screen appears.)
    5. Click Have Disk.... (The Insert Disk screen appears.)
    6. Enter the drive and/or directory where the extracted files are located.
  7. Test that the ServerNet Hardware is working. There are no test programs available. Instead, simply use DB2 to test the ServerNet hardware.

    If the hardware does not appear to be working, refer to the ServerNet-I Virtual Interface Software Release Document, (product ID N0031) for additional troubleshooting help.

Refer to DB2 Enterprise - Extended Edition for Windows Quick Beginnings for information on how to install and implement DB2 Universal Database for Windows NT.

Setup Procedure for Synfinity Interconnect

The hardware and software required to set up this environment include the following products:

The steps required to ensure that the Synfinity Interconnect works with DB2 Universal Database are shown below. Each step is only a summary of what is required; not all of the details are presented here. Use the documentation referenced at each step for the detailed instructions and direction needed.

Each Synfinity system is packaged with a Synfinity Cluster Manager Software, Version 1.10, CD-ROM. The CD-ROM contains all of the documentation and software necessary to set up the Synfinity Interconnect. In addition, the CD-ROM contains the Synfinity Cluster User Guide.

If you have other VI hardware, software, and protocols installed, it may be necessary to remove them all before installing your Synfinity Interconnect.

Once installed, the Synfinity Interconnect is considered exotic hardware and cannot be viewed through the Windows NT Control Panel.

Summary of steps:

  1. Install Adapter Cards
  2. Install Synfinity Cluster Manager Software
  3. Install Switches and Cables
  4. Test the Interconnect

Here are the steps:

  1. Install the Synfinity PCI Network Interface Card. Please refer to the Synfinity Cluster User Guide for installation instructions.
  2. Install the Synfinity Cluster Manager Software on a node connected to the Switch. Please refer to the Synfinity Cluster User Guide for installation instructions.

    The node you select will be the Cluster Manager. This is the only node where you have to install the software from the CD.

    Once installed, run the Synfinity Cluster Manager software. The Cluster Manager builds a cluster plan, guides you step by step through configuring the network, and advises on the best routing and cabling options. Complete this step before connecting any cables to the Synfinity switches and network cards. As part of the planning process, the Cluster Manager uses the cluster plan to create installation diskettes for use on the other nodes; these include the driver software for the cards on those nodes. Refer to the Synfinity Cluster User Guide for complete details.

  3. Install the Synfinity Switch and Cables. Please refer to the Synfinity Cluster User Guide for installation instructions.
  4. Test that the Synfinity hardware is working, as follows:
    1. On any system in the cluster, open a "Command Prompt" window in Windows NT.
    2. Change directory to the "utils" subdirectory of the directory where the Synfinity Cluster Manager software was loaded.
    3. Type "vitest" and note the node number that is displayed.
    4. Move to any other system in the cluster and open a "Command Prompt" window.
    5. Change directory to the "utils" subdirectory of the directory where the Synfinity Cluster Manager software was loaded on this other system.
    6. Type "vitest x", where x is the node number from step 3 above.
    7. A "CONNECTION GOOD" message should be displayed; an example session is shown after this list.
    8. If a "NO CONNECTION" message is displayed, check the cabling and hardware setup. Refer to the Synfinity Cluster User Guide for further information on troubleshooting the problem, and check the "Tech-tips" support pages at http://www.fjst.com/
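
    For example, a minimal test session might look like the following. The installation directory C:\Synfinity and the node number 3 are assumptions; use the directory and node number from your own systems.

    On the first system:

      C:\> cd \Synfinity\utils
      C:\Synfinity\utils> vitest
      (a node number is displayed; assume it is 3)

    On any other system in the cluster:

      C:\> cd \Synfinity\utils
      C:\Synfinity\utils> vitest 3
      CONNECTION GOOD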

Refer to DB2 Enterprise - Extended Edition for Windows Quick Beginnings for information on how to install and implement DB2 Universal Database for Windows NT.

Enabling DB2 to Run Using VI

Detailed installation information is found in DB2 Enterprise - Extended Edition for Windows Quick Beginnings.

After completing the installation of DB2 as documented in DB2 Enterprise - Extended Edition for Windows Quick Beginnings, set the following DB2 registry variables and carry out the following tasks on each database partition server in the instance:
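
The specific registry variables and tasks are listed in that book. As an illustration only, here is a minimal sketch of what enabling VI might look like from a command prompt on one database partition server. The db2stop, db2set, and db2start commands are standard DB2 commands, but the registry variable names DB2_VI_ENABLE, DB2_VI_VIPL, and DB2_VI_DEVICE and the values shown (the vipl.dll library and the device name nic0) are assumptions for illustration; confirm the exact variables and values for your interconnect in the Quick Beginnings book.

    rem Stop the instance before changing the registry
    db2stop
    rem Enable VI and identify the VIPL library and the VI NIC device
    rem (variable names and values below are assumptions; verify for your system)
    db2set DB2_VI_ENABLE=ON
    db2set DB2_VI_VIPL=vipl.dll
    db2set DB2_VI_DEVICE=nic0
    rem Restart the instance so that the new settings take effect
    db2start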

