ORCHESTRATE: APT INTRODUCES PARALLEL DATA WAREHOUSING & DATA MINING APPLICATION DEVELOPMENT ENVIRONMENT FOR IBM DB2 PARALLEL EDITION

Applied Parallel Technologies, Inc. (APT) announces ORCHESTRATE, an advanced data-mining application-development environment for the IBM RS/6000 Scalable POWERparallel Systems (SP) server using IBM DB2 Parallel Edition (PE). APT has designed the ORCHESTRATE Development Environment to support the construction of large-scale data-warehousing and data-mining software systems that fully exploit the capabilities of parallel computing and parallel RDBMS systems. By hiding the complexities of both parallel programming and the handling of large volumes of data, ORCHESTRATE enables commercial systems integrators and application developers to build these systems faster and at lower cost.

ORCHESTRATE Development Environment Features and Benefits

Features:
* Component-based parallel application-development environment for construction of data-warehousing and data-mining products.
* Hides the complexities of parallelism and of operating on large volumes of data from developers.
* Parallel data-management toolkit includes functions to support loading, sorting and handling of data.
* Parallel data-analysis toolkit provides statistical and advanced data-mining analytical tools.
* Parallel Client provides parallel application connections to DB2 PE.
* Supports application development on RS/6000 workstations.

Benefits:
* Simplified programming model allows simpler, faster, lower-cost construction of large-scale parallel software systems.
* Parallelization of existing sequential software applications (for example, runs SyncSort in parallel).
* Application portability across the RS/6000 family, including symmetric multiprocessing (SMP) and MPP systems.
* Extends the parallelism of the parallel RDBMS system into the logic of the application.

ORCHESTRATE is currently running at multiple beta sites and will be generally available in August 1996.
Submitted by Yucel Uygungil of the IBM Software Solutions Toronto Laboratory. He can be reached by e-mail at uygungil@vnet.ibm.com.

APT Contact: Robert Utzschneider, APT, (617) 494-1177, ext. 116, rlu@aptinc.com

LOTUS APPROACH 96 FOR WINDOWS 95
A Brand New Approach to DB2!

Since Lotus was acquired by IBM, Lotus Approach has been named the end-user reporting and analysis tool for the DB2 family of products. The combination of the scalable, industrial-strength RDBMS capabilities of DB2 with Approach's PowerClick reporting, charting and other intuitive analysis features provides users with the optimal tools for managing client/server information.

Lotus Approach 96 for Windows 95 includes breakthrough advances in usability, data analysis, team computing and integration capabilities. The product has garnered more than 50 awards and honors worldwide, each underscoring the most notable characteristic of Lotus Approach: it is incredibly easy to use. Here's what DB2 users have to look forward to:

* Easily defining and storing SQL queries. The Open SQL Assistant guides you step by step and helps you identify which specific SQL tables, rows and columns you want to work with. This helpful innovation lets you run complex queries without bogging down the server or the network with huge amounts of data.

* Saving time on complex finds and sorts. The Find Assistant makes complex finds easy by walking you through the process of defining the records you are seeking. It even shows you, in plain English, what criteria you've selected. You can then store your find and sort criteria for later use and instantly access named finds and sorts by selecting them from a drop-down list.

* Generating reports with ease. The Lotus Approach PowerClick report writer gives you unprecedented power through simple clicking and dragging.
Unlike "banded" report writers, Approach's WYSIWYG report design feature lets you design just what you need by showing you exactly how the data will appear in print as you work. You can move fields simply by dragging them where you want them to go. You can also save time while you design reports on large sets of data by selecting a small subset of records to use during the design phase. You'll view live data as you design; once you're done, you can report on the full set of data.

* Analyzing data with a click. PowerClick analysis lets you sort, group or summarize data simply by choosing a column and clicking on SmartIcons. You can generate totals, averages, counts, minimums, maximums, variances and more with simple mouse clicks. Create multiple summary panels and calculations with ease.

* Summarizing intelligently. Perform analysis easily by positioning calculations where desired and letting Lotus Approach instantly interpret the context. For example, a total placed after a grouping will reflect the total for that group only. Use the same total at the end of the report and it will display the total for the entire column.

* Performing cross-analysis with drag and drop. With the crosstab view, you just drag and drop fields to define your desired crosstab, then perform advanced analysis by obtaining the sum, count, average, standard deviation, minimum, maximum or variance of any cell, row or column.

* Drilling down to data. Drill down on a section of your chart or crosstab summary to instantly see the data set that makes up a row, column, single cell or result. Specify how you want to display the data or results -- in a form, report or worksheet.

Find out more

Lotus Approach enjoys a rich history of breakthroughs and innovation. Its built-in usability, speed and power make users productive quickly while helping them get the most out of their information. Now, DB2 users can finally share in the fun.
To find out more about Lotus Approach in your environment, come to the DB2 Technical Conference in Miami Beach, October 14-18, 1996. To enroll or get more information, call 1-800-IBM-TEACH (426-8322) or 001-520-574-4500 and ask for "Conferences". (See the DB2 Conference article on page 4.)

LETTER FROM JANET PERNA: DB2 FAMILY CONTINUES TO GROW

The first half of 1996 has been a very busy one for the DB2 team and all of our partners as we continue to expand the DB2 family to address your requirements for more platforms, function, and solutions. It's also been a busy time for those of you who have been deploying the DB2 family of products in your enterprises. The number of DB2 licenses continues to grow, with over 800,000 to date, and over 10,000 enterprises are now using the DB2 Common Technology.

DB2 2.1.1 is now available on OS/2, Windows NT, AIX, HP-UX, Sun Solaris, and Siemens. We will begin a Beta program on SCO UNIX in June. In March, IBM announced an integrated family of software servers for OS/2, AIX, and Windows NT. These include Database, Transaction, Internet, Systems Management, Communications, and Lotus Notes servers. The Database Server, based on DB2 2.1.1, is now available on OS/2 and AIX and will be shipping on Windows NT shortly.

We achieved major milestones last month with the publication of leading TPC-C results on two different Sun platforms. We were also first in the industry to publish TPC-D results for a 300 GB database with DB2 PE on the RS/6000 SP2, and first to publish a TPC-D result on Windows NT. These results demonstrate our commitment and ability to provide leading database performance regardless of platform.

On the S/390 front, DB2 MVS 4.1 is going strong, as is the latest release of DB2 VM/VSE (SQL/DS 3.5), and DB2 SMP for OS/400 brings a new level of parallelism to the AS/400.

The DB2 Software Partners have been busy as well.
The number of available applications and tools supporting DB2 continues to grow, with well over 1,000 now available . . . and we continue to welcome new partners into our DB2 Partners in Development program.

In April, IBM used DB2 Expo in San Francisco as an opportunity to make two major Data Management announcements in the area of data mining and decision support. The IBM Intelligent Miner and the IBM Intelligent Decision Server are solutions built on the DB2 Family of database products that will enable a significant new class of applications that turn data into information. The ability to use the massive amounts of data stored in operational systems to drive business decisions is a critical requirement for our businesses, and IBM is happy to be in a leadership position in providing these solutions.

Last month, a number of DB2 experts participated in the DB2 Internet Roundtable forum. We had the opportunity to chat with many of you in this way and hope that you enjoyed it as much as we did.

Last, but not least, two DB2 customers received awards at the VLDB Conference. Tel-Way Japan received the award for the largest UNIX decision-support database with DB2 PE, and UPS was awarded the prize for the largest OLTP database with DB2 MVS.

Over the next several months, we are working on an upgrade to DB2 PE as well as another technology uplift to DB2 V2. We are also busy moving our replication, DataHub, and DataJoiner products to a number of platforms in conjunction with the DB2 Common Server. We continue to pick up the pace in delivering world-class products and solutions to you.

Once again, thank you for your commitment and support of the DB2 Family. We appreciate your business.

Janet

DB2 COMMON SERVER CERTIFICATION PROGRAMS
BE SMART! BE CERTIFIED!

You definitely know your stuff. But wouldn't it be nice to be able to prove it -- to your boss, your peers, your customers and yourself?
Now you can make sure you have the proper skills and demonstrate your knowledge by becoming professionally certified in your area of expertise.

DB2 (Common Server) Version 2

In the DB2 Version 2 workstation arena, IBM offers certification in two areas. After passing the DB2 Fundamentals exam, you can choose to complete the DB2 Administration exam to become a Certified Database Administrator or the DB2 Application Development exam to become a Certified DB2 Application Developer.

To get more information such as test objectives, sample tests and education information, call 1-800-IBM-4FAX (001-415-855-4329) and request the following documents:

#1415 DB2 Certification Program general description
#1416 DB2 Fundamentals (V2) exam objectives (exam #500)
#1418 DB2 Administration (V2) exam objectives (exam #501)
#1419 DB2 Application Development (V2) exam objectives (exam #502)

The following courses from IBM Education and Training will help you prepare for these exams:

* DB2 Fundamentals Exam (#500)
  - U4250 DB2 Common Server Overview and Functions
* DB2 Database Administrator Exam (#501)
  - U4263 Database Administration Workshop for DB2 for OS/2
  - U4264 Database Administration Workshop for DB2 for AIX
* DB2 Application Developer Exam (#502)
  - U4228 DB2 Programming Fundamentals for UNIX, OS/2 & DOS
  - U4229 DB2 Intermediate Programming for UNIX, OS/2 & DOS

Call 1-800-IBM-TEACH (426-8322) in North America to enroll in a course, get up-to-the-minute schedules, or inquire about a private class at our place or yours. Outside the U.S., please call 001-520-574-4500 from 8 a.m. to 8 p.m. EST.

You can also learn more about IBM's education offerings as follows:
* On the Internet, on the World Wide Web at http://www.training.ibm.com/usedu
* By fax at 1-800-IBM-4FAX (001-415-855-4329), selecting document number 0007 for an education index.

Submitted by Amiet Goldman, Senior Marketing Strategist, IBM Education and Training, New York. She can be reached at (212) 745-3824.
ACCESS AND MANAGE DATA IN YOUR MULTI-PLATFORM I/S WORLD

DB2 TECHNICAL CONFERENCE
October 14 to 18, Fontainebleau Hotel, Miami Beach, Florida -- $1575

DATA WAREHOUSE TECHNICAL CONFERENCE
October 14 to 16, Fontainebleau Hotel, Miami Beach, Florida -- $775

Announcing two outstanding 1996 conferences designed to help you maximize performance from your database systems! The 1996 DB2 Technical Conference and the 1996 Data Warehouse Technical Conference share the common goal of helping you transform data into decision-making information that can be accessed and exploited for greater personal and organizational success. Both conferences offer the in-depth information you need to meet your most pressing issues and challenges. At either conference, you will be able to:

* Examine the latest releases, products and solutions that help maximize current systems, maintain control over emerging technologies and preserve your legacy investment.
* Explore network-centric computing with breakthroughs in application accessibility that enable you to take full advantage of global network connections.
* Dig into data mining and "what if" scenarios that yield rich, new information from existing data.
* Learn about the entire family of DB2 databases for help in re-hosting decisions.
* Try out the latest hardware and software solutions at this year's exceptional Product Expo.
* Gain insights from product developers and other industry experts that help you speed implementation, avoid problems and maximize your capabilities.
* Network with peers, exchange ideas and gain fresh answers to your mutual challenges.

If you're a DB2 professional, mark the 1996 DB2 Technical Conference on your calendar, October 14-18 in Miami Beach. It is the source for the most current information on the entire family of DB2 products at the core of today's client/server database solutions.
In 4 1/2 days of comprehensive education, you will:

* Learn what's new in DB2, including the new Version 4.2 release for MVS and the latest from DB2 Common Server.
* Enhance your understanding of OS/390, the next step in the transformation of System/390 into an open, enterprise-wide, network-centric, client/server operating system.
* Benefit from more than 90 in-depth elective sessions on the hottest topics, including DB2 for OS/2, AIX, HP-UX, NT, Sun Solaris, AS/400, VM and VSE, plus DB2 Parallel Edition, DataHub, DataJoiner and replication with DataPropagator.

This Technical Conference will be the biggest and best ever! While you're there, attend any session offered at the Data Warehouse Technical Conference for free!

Data Warehouse enthusiasts: circle October 14-16 for the 1996 Data Warehouse Technical Conference, also taking place in Miami Beach. This is your opportunity to learn about the best and brightest tools for constructing, monitoring and managing your data warehouse. In 2 1/2 days of in-depth education, you will be able to:

* Use the latest data warehouse solutions to transform data into decision-making information.
* Hear the latest tips and tactics from experienced industry experts who have built data warehouses.
* Attend a range of in-depth elective sessions designed to help you master your most pressing challenges, including capacity, recoverability, availability, security, scalability, and ease of use.
* Learn about vital topics such as replication, OLAP and data mining.
* Compare relational and multidimensional database approaches.

CALL NOW! Don't miss these conferences and the opportunity to boost your database performance for years to come. To register or request a conference brochure, call 1-800-IBM-TEACH (1-800-426-8322) and ask for "Conferences". If you are calling from outside the U.S.A. or Canada, please call our international number, 001-520-574-4500.
IDUG IS CHANGING TO MEET CHANGING NEEDS

IDUG European Conference: October 21 to 24, Amsterdam, The Netherlands
IDUG Asia Pacific: November 13 to 15, Melbourne, Australia

The International DB2 Users Group (IDUG) is an independent, not-for-profit, user-run organization whose mission is to support and strengthen the information systems community by providing the highest quality education and services designed to promote the effective utilization of the DB2 Product Family.

Now, what do these words mean to you as a user of the DB2 Product Family? They mean IDUG has the unique ability to bring you the information you need to improve your skills, become more valuable to your company and advance your career.

Founded in 1989, IDUG originally focused on the DB2 for MVS systems programmer and DBA. As the industry and the responsibilities of DB2 professionals have evolved, so has IDUG. Today, IDUG's education and services also help systems developers, data architects, data administrators, systems analysts, client/server consultants, technical managers and users working with DB2 on all platforms to improve their skills.

IDUG is further expanding to meet the needs of DB2 professionals who work in a heterogeneous environment. In the future, IDUG will focus on topics such as:

* connectivity to or from DB2 and any other database engine;
* management of DB2 and any other database engine;
* propagation to or from DB2 and any other database engine.

This expansion began on the exhibit floor at the North American Conference in Dallas, June 2 to 6, 1996. Vendors were encouraged to talk about their complete product lines, so that attendees could see solutions that work with DB2 and other DBMSs.

IDUG is also expanding its member services to the World Wide Web. In addition to IDUG's printed publications, The IDUG Solutions Journal and the GLOBE, IDUG is building a comprehensive web site. The IDUG web site (http://www.idug.org) currently has member and conference information available.
Within the next few months, IDUG's web site will also include a job bank for DB2 professionals. And, in the IDUG tradition, the web site will continue to expand to meet the changing needs of its membership.

As stated above, IDUG's goal has always been to support and strengthen the information systems community by providing the highest quality education and services to its members. How is IDUG able to accomplish all this? Again, we refer back to the IDUG mission statement. Because IDUG is an independent organization, it is able to focus on the needs of its members rather than on the desires of a vendor. Although IDUG maintains a close relationship with IBM and other vendors, vendors do not dictate the direction of IDUG or the technical content of its conferences or publications.

IDUG knows the needs of DB2 users because the organization is run by DB2 users. For this reason, it knows the market trends and what topics are of interest to its members. IDUG's not-for-profit status means that all the money made from its conferences is spent expanding member services. Over the years, this has allowed IDUG to expand to serve DB2 users worldwide. In addition to its North American conference, four years ago IDUG began hosting a conference in Europe. And last year, IDUG added an Asia Pacific conference.

As you can see, IDUG works very hard to adhere to its mission statement. It is always changing and growing to provide the highest quality education and services to you, the user of the DB2 family of products.

May your DBMS be with you,
Freddy Hansen
IDUG President

CALL FOR PRESENTATIONS
International DB2 Users Group (IDUG) 9th Annual North American Conference
May 11 to 15, 1997
Chicago Hilton and Towers, Chicago, Illinois

SPEAK AT THE MOST PRESTIGIOUS DB2 EVENT

New for 1997, IDUG is expanding the types of presentations it is accepting. In addition to DB2-specific papers, IDUG is looking for papers that will help its members who work in a heterogeneous environment.
If your knowledge, experience and expertise include working with the DB2 Product Family and any other database engine, share them by speaking at IDUG's 9th Annual North American Conference in Chicago. Call IDUG Headquarters at (312) 644-6610 for a "Call for Presentations" form.

DB2 EXPANDS ON THE WORLD WIDE WEB

IBM is committed to the World Wide Web (WWW) and is continually updating and expanding its Web pages. The DB2 team has been busy improving its own set of Web pages. The recent availability of the DB2 Product and Service Technical Library means more technical information than ever before for DB2 users on the Web. An exciting footnote is that we built this Library entirely on DB2 technology, using products such as DB2 for AIX, DB2 WWW Connection, and DB2 Text Extender.

The DB2 Product and Service Technical Library brings you information on DB2 for OS/2, DB2 for AIX, DB2 for Windows NT, and other DB2 for common server products. Using the Library, we can give you the latest DB2 information available. And we plan to update the information with product changes, comments from the DB2 team and from users -- like you.

The Library includes a core set of DB2 common server books, including the SQL Reference, the Administration Guide, the Command Reference, and other books written for the DB2 common server platforms. It also contains step-by-step instructions on how to perform key tasks, technical notes with the latest information on bugs and workarounds, and frequently asked questions from the customer help lines.

You can access the information in the Library in various ways. A powerful keyword search lets you search documents by words or phrases. Now you can search through multiple documents without having to flip through pages or start multiple online books. You can also use the Library to access books through their tables of contents, browse through a list of recent additions, and check out the list of the top ten frequently asked questions.
We have exciting plans to expand the DB2 Product and Service Technical Library. For example, we will be including platform-specific books and white papers, and we're working on a feature to help you find information on particular tasks you want to perform. The Library is just one of the many additions we're making to the DB2 World Wide Web pages. We hope you'll be visiting our pages often in the coming months as we continue to provide more information and features.

DB2 Information Everywhere

You often need new information after the books are published. For DB2, you might need to know about performance issues, connectivity problems, changes to third-party software, and so on. Fortunately, the DB2 Product and Service Technical Library now gives you an interactive way to access the latest information.

LOOK NO FURTHER FOR DB2 INFORMATION

Some other sources of up-to-date DB2 information include:

World Wide Web
http://www.software.ibm.com/data/db2
The DB2 Web pages provide current information such as DB2 news, product descriptions, education schedules, world-wide phone access for DB2 support, the DB2 Product and Service Technical Library, and more.

CompuServe
IBM DB2 Family Forum (GO IBMDB2)
All DB2 products are supported through these forums, as well as IMS products.

Internet Newsgroups
comp.databases.ibm-db2
bit.listserv.db2-l
These newsgroups are available for DB2 users to discuss their experiences with the products.

Anonymous FTP Sites
ftp.software.ibm.com
In the directory /ps/products/db2, you can find demos, fixes, information, and tools for DB2 and many related products.

Written by Steve Gaebel, Database Information Developer and WWW advocate, IBM Software Solutions Toronto Laboratory. He can be reached at (416) 448-3509 or by e-mail at sgabel@vnet.ibm.com.
PLATINUM technology ANNOUNCES FAMILY OF TOOLS FOR MICROSOFT SQL SERVER, SYBASE
Secures its position as leading Microsoft SQL Server, Sybase tools provider

Oakbrook Terrace, IL, March 27, 1996 -- Extending its technical and market leadership in delivering solutions for managing client/server databases, PLATINUM technology, inc. today announced four new tools for Microsoft SQL Server and Sybase. With the addition of these tools, PLATINUM now offers over 25 software products for Microsoft SQL Server and Sybase -- more than any other third-party database management product vendor. PLATINUM provides Microsoft SQL Server and Sybase users with products for database management, data warehousing, and database application development.

PLATINUM's four new tools are:

* PLATINUM SQL-Archive for Sybase and Microsoft SQL Server, a tool that increases database administrators' productivity by automating both logical and physical backups;
* PLATINUM Fast Unload for Sybase and Microsoft SQL Server, a utility that speeds data unloading to minimize the time data is unavailable to users;
* PLATINUM Fast Load for Sybase and Microsoft SQL Server, a utility that performs data loading up to six times faster than bcp (bulk copy program), a utility available from both Sybase and Microsoft; and
* PLATINUM TSreorg for Sybase, the first tool for Sybase users that reorganizes database tables, indexes, and devices to maximize database performance.

While each tool is available individually, together the tools provide Microsoft SQL Server and Sybase users with a comprehensive solution for automating and speeding mundane administrative tasks and tuning database performance.
PLATINUM's Strategy and Integration Plans

SQL-Archive, Fast Load, Fast Unload, and TSreorg are part of PLATINUM's product strategy to bring best-of-breed data and systems management to the open enterprise environment (OEE) -- the networked, heterogeneous computing environment that includes MVS-, UNIX-, and Windows-based platforms and may contain MVS, AS/400, and OS/2 systems.

Availability, Platform Support and Pricing

SQL-Archive is scheduled to be generally available in April 1996. It supports Sun Solaris V2.3 or above, IBM AIX V3.2 or above, and HP-UX V9.0 or above. SQL-Archive is compatible with Sybase SQL Server V4.2 through System 11, and Microsoft SQL Server V4.2 and V6.0. Prices start at $1,500 (U.S.) per server.

Both Fast Load and Fast Unload support Windows NT and UNIX platforms such as Sun Solaris V2.3 or above, IBM AIX V3.2 or above, and HP-UX V9.0 or above. Both utilities are compatible with Sybase SQL Server V4.2 and System 6.0, and Microsoft SQL Server V4.2 and V6.0. Fast Unload is also compatible with Sybase System 11. Both utilities are scheduled to be generally available in April 1996. Prices for each start at $500 (U.S.) per server.

TSreorg for Sybase is scheduled to be generally available in June 1996. It supports UNIX platforms, including Sun Solaris V2.3 or above, IBM AIX V3.2 or above, and HP-UX V9.0 or above. TSreorg for Sybase is compatible with Sybase System 10. PLATINUM is currently developing TSreorg for Microsoft SQL Server. Prices start at $2,000 (U.S.) for a TSreorg console (client) and $4,500 (U.S.) for each agent (server).

Additional information about PLATINUM technology, inc. is available via the World Wide Web at http://www.platinum.com. PLATINUM technology, inc. delivers the payoff on information technology (IT) investments by providing tools that enable organizations to efficiently manage next-generation computing solutions while leveraging existing and legacy systems.
The company provides application development, business intelligence, database administration, data warehousing, and systems management software solutions for the Open Enterprise Environment (OEE) -- the networked computing environment that incorporates mainframe, desktop, and open systems.

For more information, contact:
Jan Scharlow / Rich Dobinski, PLATINUM technology, inc., (708) 620-5000, ext. 1990, dobinski@platinum.com
David Kitchen / Peter Gorman, Copithorne & Bellows, (617) 252-0606, ext. 242, peterg@ca.cbpr.com

DATAPROPAGATOR EXTENDS SUPPORT TO DB2 FOR MVS VERSION 4

Announced March 26, DataPropagator Relational Version 1 Release 2 Modification 1 is a component of IBM's Data Replication solution. It provides powerful enhancements built on DB2 for MVS Version 4 and improves product usability with an enhanced graphical user interface (GUI) that recognizes user-created target tables.

This enhanced GUI gives users the options they have been asking for. Now, they can build their own target tables if they choose not to use DataPropagator Relational's automatic table-building function, and they can specify whether or not to drop a target table when cancelling a subscription.

DataPropagator Relational Version 1.2.1 supports the DB2 for MVS Version 4 data sharing environment. DPROPR Capture for MVS exploits the new merged-log Instrumentation Facility Interface (IFI) to reliably propagate shared data in any S/390 Parallel Sysplex environment. Also, DataPropagator Relational now captures and propagates data from compressed tables in DB2 Version 4, so users can expand their use of compressed tables to gain additional disk savings.

This new modification can also bring significant performance advantages. Users who today use DPROPR Apply for AIX or DPROPR Apply for OS/2 to propagate data from DB2 Common Server sources to DB2 for MVS targets can now achieve improved performance by using DPROPR Apply for MVS instead.
(Note that these target databases can be DB2 for MVS Versions 2.3 and 3.1 as well as 4.1.)

This new modification also gives more scheduling options for capturing data, because it supports reading data from the DB2 Version 4 archive log. For example, users now can defer capturing data from a DB2 Version 4 source to off-peak periods.

To summarize, users can:

* Reliably propagate shared data from the DB2 Version 4 data sharing environment;
* Defer data capturing to off-peak times with support for retrieval from the DB2 V4 archive log;
* Save disk space and propagate with new support for replicating DB2 Version 4 compressed tables;
* Build their own target tables using an enhanced GUI that recognizes user-created tables and also lets them retain their target tables after cancelling a subscription.

Multivendor Support . . .

IBM's Data Replication solution supports data replication across heterogeneous data models and multivendor environments. In addition to DataPropagator Relational, the following components in our Data Replication solution interoperate to deliver robust, versatile replication among heterogeneous data models and environments:

* DataPropagator NonRelational
* DataRefresher
* DataHub
* DataJoiner

DATAHUB FOR UNIX OS SUPPORTS MORE CLIENT/SERVER DATABASES

Announced March 26, DataHub for UNIX Operating Systems Release 2 lets organizations operate more efficiently while expanding aggressively into client/server computing and data warehousing. This powerful tool helps administrators manage their diverse database environments from a central control point.

Key New Functions of Release 2

Out-of-the-box monitoring

An important new aspect of automated operations is the addition of "out-of-the-box" monitoring. To get started quickly and with minimal effort, users can turn on a defined set of standard conditions that monitor CPU utilization, the number of deadlocks detected, and numerous other conditions.
DataHub users are alerted when specified conditions are exceeded so they can take corrective action.

Console independence

DataHub agents can take action independently of the console. With intelligence at each server for correlation and evaluation, network traffic -- and operating costs -- are reduced.

Database discovery

When a new managed system is added, DataHub can search for existing databases, reducing the time required for configuring systems.

Grouping functions

DataHub's grouping function allows users to perform database systems management tasks on a number of managed objects with a single action. The ability to work with groups of objects reduces repetitive tasks and simplifies the way objects are displayed in the DataHub window.

Backup and recovery of Oracle databases

DataHub's graphical user interface lets users back up and recover Oracle databases, redo logs, rollback segments, and control files. They can do all this with DataHub without buying any other software.

DataHub for UNIX Operating Systems helps organizations manage an increasing number of different database systems from a central control point. In Release 2, new databases and platforms are added to the environment that DataHub manages.

Supported databases:
* DB2/6000 Version 1
* DB2 for AIX Versions 2.1 and 2.1.1
* DB2 for HP-UX Version 2.1
* DB2 for Solaris Operating Environment Version 2.1
* INGRES 6.4
* ORACLE Releases 7.0, 7.1 and 7.2
* SYBASE SQL Server Releases 10 and 11

Supported operating platforms:
* AIX Versions 3.2.5, 4.1, and 4.2;
* HP-UX Releases 9 and 10; and
* Sun Solaris Versions 2.3 and 2.4.

DataHub for UNIX OS provides a full range of database management functions, online monitoring of databases and operating systems, and expert system technology for automating database operations based on company policy -- all in a single product.
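The "out-of-the-box" monitoring described above is essentially threshold evaluation: each standard condition is sampled, compared against its configured limit, and an alert is raised when the limit is exceeded. As a rough, hypothetical sketch (not DataHub code; condition names and limits here are invented for illustration), the core logic looks like this:

```python
# Hypothetical sketch of threshold-style condition monitoring, in the
# spirit of DataHub's "out-of-the-box" conditions. The condition names
# and limit values below are illustrative only.

def check_conditions(samples, thresholds):
    """samples: {condition: observed value}; thresholds: {condition: limit}.

    Returns the conditions whose observed value exceeds the limit,
    in the order the thresholds were defined."""
    return [cond for cond, limit in thresholds.items()
            if samples.get(cond, 0) > limit]

# A defined set of standard conditions the administrator can turn on.
thresholds = {"cpu_utilization_pct": 90, "deadlocks_detected": 0}

# One sampling cycle: only the CPU condition is exceeded here, so an
# alert handler would take corrective action for that entry.
alerts = check_conditions(
    {"cpu_utilization_pct": 97, "deadlocks_detected": 0}, thresholds)
```

In the real product the evaluation runs in agents at each managed server, so only alerts (not raw samples) cross the network, which is what the "console independence" point above is getting at.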
Key New Functions of DataHub for UNIX Operating Systems Release 2: * Out-of-the-box monitoring * Console independence * Database discovery * Grouping Functions * Backup and recovery of Oracle Databases .pa ENHANCED DB2 FOR OS/400: DATABASE RECOVERY MADE EASY Do you know how long your machine would be down if it failed during the peak of your system activity? You may not be aware of this potential down time until it actually happens. Waiting for a system crash is the hard way to learn the answer. There are several recovery steps that occur after an abnormal system end. Of these, the rebuilding of access paths (indexes) is usually the most time consuming. The first step in resolving this problem is to determine how badly you are exposed. The hard way to do this is to experience an abnormal IPL (Initial Program Load). Now, there's an easy way to determine your exposure, using the Edit Recovery for Access Paths (EDTRCYAP) command. This command displays, at any point in time, how long it would take to recover indexes if the system were to abnormally terminate. It's up to you to then decide how much time you can afford. Let's discuss two methods you can use to limit access path rebuild exposure: System-Managed Access Path Protection (SMAPP) and explicit access path journaling (logging). SMAPP is new for Version 3 Release 1 of OS/400 and DB2 for OS/400 and provides an easy way to limit access path rebuilds. Explicit access path journaling has been available since the first release of OS/400, but several new options are available in this release. SMAPP SMAPP is easy to use because the system does almost all the work. Administrators simply ask themselves one question: "How much time can my operation afford to be down while we rebuild access paths?" SMAPP will automatically protect access paths, using an internal system log, so that the recovery time is met. How Does SMAPP Work? 
SMAPP determines which access paths should be protected based on two time estimates: a system recovery time and a current recovery time. The former is the goal specified by the user -- the answer to the question "How much time can I afford to spend rebuilding access paths?" The latter, the current (estimated) recovery time, is the approximate time recovery would actually take if the system were to crash at this instant. For tables that are not journaled, this consists of the access path's rebuild time. For tables that are journaled (either by SMAPP or by a user), it is the time required to apply all of the journaled key changes to the access path, which happens near the end of an abnormal IPL.

Only access paths affected by key changes are included in the estimate. When a table has not been changed, or if updates never change a record's key fields, the access path is not exposed to invalidation (that is, it is not subject to a rebuild). Access paths are also not included in the estimate when invalidated by a user-initiated function. This happens, for example, when an underlying physical table is restored (RSTOBJ) or reorganized (RGZPFM).

If the estimated time begins to exceed the target, SMAPP will start protecting more access paths. If the estimated time drops significantly below the target, SMAPP will end some access path protection. As one might guess, SMAPP protects the largest exposed access paths first. This tends to provide the maximum protection with the smallest journaling overhead.

The EDTRCYAP command is used to change the access path recovery time and to display the current estimate. This allows administrators to tune this recovery feature for their environment. For systems with user Auxiliary Storage Pools (ASPs), the recovery time can be specified for each ASP, rather than for the system as a whole. Four special values are available for the target recovery times: *SYSDFT, *NONE, *MIN, and *OFF. The initial value for the system recovery time will be *SYSDFT ("system default").
This value will be preset to give users reasonable access path protection without causing excessive performance degradation, so you may not have to change this setting. The value *NONE means that no recovery time is specified, which is the default for the ASPs. When all the ASP times are set to *NONE, the system recovery time will be used and access paths will not be managed at the ASP level. Likewise, the system recovery time could be set to *NONE and individual ASP times could be specified. If both the system recovery time and a given ASP recovery time are set to *NONE, then none of the access paths in the ASP will be protected by SMAPP. At the other extreme, the value *MIN represents the minimum recovery time. It will force all the access paths on the system or ASP to be protected. The final value is *OFF. This ends all SMAPP protection on the system. The same thing happens when *NONE is specified for the system and for all ASPs, but *OFF also prevents the bookkeeping used to determine the estimated recovery times. The bookkeeping should have a negligible effect on performance, so using *OFF is not advised. The value *NONE will provide valuable information with virtually no performance penalty.

Advantages
In addition to ease of use, SMAPP has several other benefits. Disk space is saved by periodically changing and deleting the internal journal receivers that contain the journal data. Also, SMAPP only deposits entries into journals for operations that affect access paths. With explicit journaling, once an access path and all the associated tables are journaled, all record changes result in journal entries. SMAPP has a different goal: it simply guarantees that the access paths will not have to be rebuilt (that is, that changes in an access path are consistent with changes in the associated tables), not that no data will be lost. Thus, if a record change does not affect a key field, SMAPP does not have to journal the update.
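As a sketch of how these settings might be exercised from a command line (the 60-minute target is an arbitrary illustration, and the CHGRCYAP command and its SYSRCYTIME parameter are assumed to be available on your release as a noninteractive companion to EDTRCYAP):

```cl
/* Display the current estimated recovery time and edit the      */
/* system and ASP targets interactively                          */
EDTRCYAP

/* Set an illustrative 60-minute system-wide target in batch     */
/* (CHGRCYAP is assumed available; times are in minutes)         */
CHGRCYAP SYSRCYTIME(60)

/* Force SMAPP to protect every exposed access path              */
CHGRCYAP SYSRCYTIME(*MIN)
```

Note that *OFF would also stop the recovery-time bookkeeping, which is why *NONE is the better choice when you only want SMAPP protection disabled.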
Other changes were made to enhance performance. When journal receivers are created in a given ASP, up to ten disk units in the ASP are used to allocate storage, which allows disk writes to be done in parallel. In the past, when more than ten units were available in an ASP, journal receivers simply used the first ten configured units. But now SMAPP will use the ten fastest units in the ASP. Performance can be improved significantly when these fast disk units or their IOPs contain write cache. Normally, processes writing journal entries have to wait for an I/O write to complete before proceeding, to guarantee that the journal information is safely on disk before any tables are actually modified. But when write cache is used, the information simply has to be copied to an internal buffer, after which the process can continue. (A battery backup preserves the information if the system crashes.) This typically improves performance by 20 to 30 percent; SMAPP will automatically find the faster units and allocate storage on them. Examples of such devices include the 9337 disk units (models 2xx/4xx) and the Clearwater IOP. Refer to the Programming Performance Capabilities Guide for more detailed information. Usage Considerations SMAPP is fully compatible with explicit journaling. If an access path is explicitly journaled, SMAPP will not factor its rebuild time into the system exposure. If a table is already journaled and its access paths are not, the user's journal receiver may begin to grow more rapidly. This will happen if SMAPP decides to journal the access path. Since an access path has to use the same journal as the associated tables, SMAPP will deposit the access path's journal entries into the user's journal receiver. With the new enhancements for user journals discussed below, this problem should not be a great concern. SMAPP will not protect access paths in the QTEMP library. They disappear when a system is IPLed, so they would not even survive a crash. 
Access paths that have rebuild maintenance or those that are in "forced keyed access path" mode also are not protected. These restrictions are already enforced for explicit journaling. Note that the "forced keyed access path" mode is simply an alternative to journaling. Because this forces access path changes to disk after every key change, the access path is less likely to be invalidated in the event of a system crash. But journaling does a better job of accomplishing the same goal, with better performance! Assuming the system's recovery time is acceptable, it would be advisable to set FRCACCPTH(*NO) for all tables once Version 3 Release 1 is installed. Because SMAPP deletes journal receivers when they are no longer needed, SMAPP journals do not provide a reliable audit trail. For instance, SMAPP receivers are automatically deleted when a system is IPLed. In addition, two functions cannot be done with SMAPP journaling: commitment control and the application/removal of journal changes (APYJRNCHG/RMVJRNCHG). These commands need a persistent journal environment; SMAPP may want to end journaling at any time. Explicit logging can be used for these purposes. Access Path Journaling The second method to limit IPL access path rebuild exposure is by using the explicit access path journaling support. With this function, the user decides which access paths should be protected and starts journaling for those access paths via the Start Journal Access Paths (STRJRNAP) command. By explicitly journaling access paths, users can choose those access paths that are most critical to their business and ensure that they are available soon after an abnormal IPL. The user may want to do this even with the SMAPP support because SMAPP may not choose to protect those access paths that the user considers important. Some management overhead is incurred by the user in managing the journal environment required for explicit access path journaling. 
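As an illustrative sketch (the library, file, and journal names are hypothetical), explicit access path journaling might be started like this -- the underlying physical file must already be journaled to the same journal:

```cl
/* Journal the physical file first (a prerequisite for STRJRNAP) */
STRJRNPF FILE(PAYLIB/ORDERS) JRN(PAYLIB/ORDJRN)

/* Then journal its access paths so they need not be rebuilt     */
/* after an abnormal IPL                                         */
STRJRNAP FILE(PAYLIB/ORDERS) JRN(PAYLIB/ORDJRN)
```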
With Version 3 Release 1, there are some new journal management functions that can simplify managing the journal environment and make access path journaling more attractive. For more information on access path journaling, see the Backup and Recovery Advanced book.

Journaling Improvements
There are three new options available for a journal that can lessen the pain of journaling, both for indexes and for tables.

1. Remove Internal Entries Option
If you have tried access path journaling in the past and could not live with the additional disk storage required, you may want to try the new receiver size option that is available on the Create Journal (CRTJRN) and Change Journal (CHGJRN) commands as RCVSIZOPT(*RMVINTENT). With this option, internal entries used only for IPL recovery are removed from the receiver when they are no longer needed. This limits the amount of additional storage needed to perform access path journaling. This option is especially attractive for users who are currently journaling some of their tables and who want to use SMAPP or explicit access path journaling for those tables. If the tables being journaled are chosen for protection via SMAPP, or access path journaling is started for them, there will be an impact on the rate at which the receivers grow, as access path entries will be added to the receiver. Using the *RMVINTENT option causes the additional access path entries to be discarded when they are no longer needed. Also, these entries are no longer saved to media when the journal receiver is saved, so less media storage is required to save a journal receiver.

2. System Change-Journal Management
Another new option available on the CRTJRN and CHGJRN commands is the option to have the system create and attach a new journal receiver when the attached receiver reaches a specified threshold value. This is the MNGRCV(*SYSTEM) option and is called system change-journal management.
With this option, the actual time at which the Change Journal is performed is no longer predictable because it is performed by the system -- not the user. If the system is changing receivers, the receiver threshold message (CPF7099) is no longer sent to the threshold message queue. If the system encounters a lock conflict while trying to change receivers, it will retry the change every 10 minutes. Additionally, when the system changes receivers, the actual CHGJRN behaves a little differently: there is no longer a performance I/O spike when it is performed. Because of the implementation of this system change-journal management function, it may be some time after the CHGJRN before the receiver can be deleted (even if all tables under commitment control are at transaction boundaries). For some journal users, it may be easier for the user to know when a CHGJRN can be performed than for the system to find the window of opportunity. One case would be if the Receive Journal Entry (RCVJRNE) command is being used. In these cases, or if the user wants to save the journal receiver at CHGJRN time, the user may want to continue managing the receivers.

3. Automatic Delete of Journal Receivers
The last new option on the CHGJRN and CRTJRN commands is an option to have the system delete detached journal receivers when they are no longer needed for IPL recovery. With this option, storage for the receiver is freed as soon as possible. If a table is journaled only to provide access path journaling or for commitment control reasons, then this option may be useful. If a table is being journaled to provide an audit trail or for disaster recovery (via APYJRNCHG), then this option should not be used. In the latter two cases, the user would want to save the receiver before it is deleted, and there is no guarantee that the receiver can be saved before it is deleted.
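Pulling the three options together, a journal environment that uses all of them might be created like this (library, journal, and receiver names are hypothetical, and the threshold value is an arbitrary illustration):

```cl
/* Create the first receiver in the chain; THRESHOLD is in KB    */
CRTJRNRCV JRNRCV(PAYLIB/ORDRCV0001) THRESHOLD(50000)

/* Create the journal with internal-entry removal, system        */
/* change-journal management, and automatic receiver deletion    */
CRTJRN JRN(PAYLIB/ORDJRN) JRNRCV(PAYLIB/ORDRCV0001) +
       RCVSIZOPT(*RMVINTENT) MNGRCV(*SYSTEM) DLTRCV(*YES)
```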
This option can only be used if the system is also performing system change-journal management for the journal. Now that there is an easy way to predict how painful an abnormal system end can be (EDTRCYAP), and two easy ways to limit your exposure (SMAPP and explicit access path journaling), there is no need for you to see a lengthy IPL after an abnormal system end. Give the new functions a try -- SMAPP, explicit access path journaling, or a combination of both -- and sleep a little easier at night. Written by Mike Groeschel, DB2/400 development (507) 253-1032; and Peg Levering, DB2/400 development (507) 253-4256, IBM Rochester, AS/400 Division.
.pa
DB2 SMP FOR OS/400 NOW AVAILABLE
In today's business world, fast information retrieval can mean the difference between success and failure. Corporations are looking to the enterprise data management system to rapidly transform huge amounts of data into valuable business information. The DB2 Symmetric Multiprocessing (SMP) for OS/400 feature enables fast information retrieval on the AS/400 business computing system. This parallel database feature includes several leading-edge query optimization techniques that were designed for high-speed parallel processing. The DB2 SMP for OS/400 enhancement is a licensed feature of OS/400 (5763-SS1); OS/400 Version 3 Release 1 Modification 1 is the only version currently supported. Worldwide general availability for this feature occurred in January 1996. Documentation for this feature can be found in Technical Newsletter SN41-3680-00. This documentation will also be incorporated into the latest softcopy versions of the DB2 for OS/400 SQL Programmer's Guide. Informational APAR II09006 should be referenced for the latest DB2 SMP enhancements. Please send your comments and questions to db2400@vnet.ibm.com. Submitted by Kent Milligan, DB2/400 Development, IBM Rochester, AS/400 Division. He can be reached at (507) 253-5301 or by e-mail at kmill@vnet.ibm.com.
.pa
USING TYPE 2 INDEXES IN DB2 FOR MVS/ESA VERSION 4: Increase Availability Through Better Index Management
Interest in DB2 Version 4 just keeps growing. In fact, there is such a wealth of function in Version 4 that it would be fun and beneficial to discuss it all. For now, let's focus on the type 2 Index Manager. Using type 2 indexes is one area of DB2 Version 4 that has immediate benefits. However, it definitely needs a bit of planning to implement successfully and requires some understanding of why the design changes are important and how to take advantage of them. First, the type 2 index manager is neither a replacement for nor an enhancement of the current index manager. The type 1 index manager is still available, and DB2 will support both index manager subcomponents in Version 4. The current release of the DB2 for MVS/ESA Version 4 Release Guide (SC26-3394) states that the type 1 index manager "could be" eventually replaced. In light of this, there is little reason not to convert to type 2 indexes once your Version 4 installation has stabilized.

There are several important reasons to convert to the type 2 index manager. The new WHERE NOT NULL keyword on the SQL CREATE UNIQUE INDEX statement, for example, allows multiple null values within a type 2 unique index. Row-level locking is now available in DB2, but only if all indexes on table spaces defined with LOCKSIZE ROW are type 2 indexes. A dirty read is also available if the new isolation level uncommitted read (UR) is specified; again, this requires a type 2 index if the access path includes an index. CPU parallelism and part 2 of partition independence are both available with DB2 Version 4, but these, too, require type 2 indexes.

The Index Design
The elimination of locks and the elimination of subpages are two significant design enhancements that affect both concurrency and availability. Because DB2 no longer locks at any level of the index structure, the chances of deadlocks and time-outs caused by the index greatly decrease.
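As a sketch of the new options described above (the table, table space, and index names are made up, and the clause order follows the Version 4 syntax as best recalled -- check the SQL Reference for your release):

```sql
-- Unique index that tolerates multiple nulls (type 2 only)
CREATE TYPE 2 UNIQUE WHERE NOT NULL INDEX IXCUST
    ON CUSTOMER (CUST_NO);

-- Row-level locking requires type 2 indexes on the table space
ALTER TABLESPACE CUSTDB.CUSTTS LOCKSIZE ROW;

-- Uncommitted read ("dirty read") at the statement level
SELECT CUST_NO, BALANCE FROM CUSTOMER WITH UR;
```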
DB2 uses a technique called lock avoidance (introduced in DB2 Version 3). This technique uses latches for index access, thus avoiding the expense and problems associated with locks. A type 2 index no longer has subpages; the SUBPAGE parameter has been removed. This will end all of the discussion of whether SUBPAGE 1 or SUBPAGE > 1 should be coded. It will also end all of the excessive locking the type 1 index manager had to go through when a SUBPAGE > 1 value was specified.

What happens when an insert is performed on a table that has a type 1 index defined? Because the leaf page keys are maintained in order, the index manager must ensure the new key is inserted in the correct place. If keys 1 and 3 already exist and key 2 is inserted, DB2 places key 2 between keys 1 and 3, shifting all the key/RID combinations, starting at key 3, to the right to make room. The same is true if an additional row is added for an existing key value in the index. This insert will cause a RID to be added to the RID list of the existing key. The type 1 index inserts the new RID at the beginning of the RID list, immediately following the key. All the RIDs for this key, and all key/RID combinations following the RID being inserted, are once again shifted to the right. If this index was defined with a SUBPAGE value greater than 1, both of these processes could cause the index manager to "reorganize" the subpages to redistribute the keys evenly across all subpages. DB2 is doing a lot of work to perform a simple insert.

How does the type 2 index manager compare? With a type 2 index, ordered keys are no longer maintained. New keys are inserted in the first available free space in the leaf page. This is accomplished by maintaining a key map at the bottom of the leaf page, similar to the map IDs found in a data page. These key maps are offsets to the actual keys in the leaf page. Because the key map is kept in sequence, the keys no longer have to be.
Maintaining the sequence of the key map offsets rather than of the key/RID combinations helps improve insert performance by eliminating the movement of large amounts of information within the leaf page. (See Figure 1.) The type 2 index manager also takes advantage of RID chain pointers to help avoid key/RID movement in the page. If there is no room to add an additional RID to the RID list because another key follows the last RID in the list, the type 2 index will change the last RID to a pointer to another location on the leaf page. There it will set a pointer/RID combination that points to the new RID just inserted. (See Figure 3.) All further inserts will result in additional pointer/RID combinations being added to the leaf page. A RID counter preceding the key is also changed to a negative value that represents the number of remaining RIDs. There will never be more than one RID associated with each pointer. DB2 does have an internal threshold that controls how long this chain can get before DB2 "reorganizes" the leaf page.

The next problem addressed by the type 2 index is delete processing. Consider a key, with its list of RIDs, in a non-unique type 1 index. Because of the way the type 1 index performs an insert, the RID list is unordered. If an application wants to delete a row using a cursor, the delete operation will affect all indexes on the table from which the row is being deleted. DB2 must serially scan the RID list to locate the RID it needs to delete. After deleting the RID from the list, DB2 must shift all the RIDs that follow the point of deletion to the left to fill the hole left by the departing RID. If enough RIDs are deleted, the actual number of index levels could be reduced at the instant of deletion. How does the type 2 index help? When RIDs are added to a key in a type 2 index, the RIDs are maintained in order. (See Figure 1.) When a delete using a cursor is performed, a binary search can be used to search the RID list.
This is far more efficient than a serial search. An even more significant improvement is the addition of a flag byte preceding each RID. If the table space LOCKSIZE is ROW or PAGE and a RID is deleted, the type 2 index manager will turn on the pseudo-delete bit (the first bit in the flag byte) rather than physically removing the RID. This makes the RID unavailable and avoids the overhead of actually deleting it. In addition, if a rollback occurs, the bit can simply be turned off. In a type 1 index, the RIDs would have to be re-added to the index.

There are two other major design improvements. In the non-leaf page, a type 2 index only stores enough of the key to identify the high key value on the previous leaf page. This is called key truncation. For example, say the high key on leaf page 10 is Karen, and the first key on leaf page 11 is Lauren. DB2 only needs to know that everything on the previous leaf page is less than the letter "L," so only the "L" needs to be stored in the non-leaf page. In addition, if the RIDs for a non-unique index key span multiple leaf pages, a type 2 index stores in the non-leaf page the high RID value for the last page with the key. This gives DB2 the exact leaf page at which to start reading, instead of having to scan multiple leaf pages.

Page Splits
Some applications need to deal with ascending sequential keys. If the application is using a type 1 index and fills a leaf page, half of the keys would remain in the original leaf page while the other half would be moved to a new leaf page. Because the keys are ever increasing, the original, now half-full leaf page will never have another key inserted into it. This index, which can quickly become an index space consisting of a lot of half-utilized leaf pages, will eventually require reorganization. A DB2 REORG will have to be scheduled to remove the free space, making the index space unavailable for some period of time. A type 2 index has a different approach.
When a leaf page becomes full, the index manager is aware that the key being inserted is higher than any key in the index. DB2, rather than performing a page split, creates a new leaf page and inserts the new key. This eliminates the creation of half-full leaf pages. If the key inserted is not greater than the highest key, all works as it did in the past: a page split occurs, moving half the keys to the new page. This can turn out to be a "good news, bad news" joke. Often, to avoid the type 1 index problem described above, an additional column (an inverted timestamp, for example) is added to the front of the key. This causes a randomizing effect, spreading the keys around all of the pages and avoiding the page split situation that leaves half-full pages with space that cannot be used. If that same modified key is used with a type 2 index, the page split that the index is designed to avoid now occurs. You will not be able to take advantage of the new method for splitting pages.

New CREATE/ALTER Syntax
There has been only a minor change to the syntax of the SQL CREATE INDEX statement: the addition of the TYPE keyword. When creating a new index, you can specify TYPE 1 or TYPE 2. There is no default for TYPE; instead, it is decided at installation. Whatever value is specified for option 8, "Default Index Type," on install panel DSNTIPE will be used if the TYPE keyword is not specified at CREATE time. The installation default is type 2. When first migrating to Version 4, you may want to consider specifying type 1 during installation; type 2 indexes can then be explicitly specified. This would make fallback to Version 3, if necessary, a little easier. Once DB2 Version 4 has stabilized and there is little chance of a fallback, change the installation value to type 2 using the install panels or by modifying the DSNZPARM DEFFIXTP on the DSN6SPRM macro. Next, we can look at the SQL ALTER INDEX statement, something very important during your migration.
As you become comfortable with Version 4 and the chances of fallback decrease, you will want to convert your existing indexes to type 2. The ALTER INDEX statement has a new CONVERT TO clause that will convert the DB2 Catalog information describing the index from type 1 to type 2, or from type 2 to type 1. Altering an index to type 2 will also modify the Database Descriptor (DBD) that contains that index. The actual index structure is not changed until RECOVER INDEX is run. The REORG utility cannot be used for this because ALTER INDEX leaves the index in a recover pending (RECP) state, and REORG will not run against an object in RECP. DB2 has also added an option to CATMAINT, the catalog conversion utility, that will completely convert all of the Catalog and Directory indexes to type 2. Be careful about converting your DB2 Catalog and Directory indexes to type 2 too soon. If a type 2 index exists on the Catalog and fallback to Version 3 is required, Version 3 will not start because it will find an invalid DBD structure for the Catalog. You could end up with a Version 4 and a Version 3 system, neither of which can be started.

Recommendations
Making a recommendation for type 2 indexes is easy. We've all been waiting for them for some time now, so use them. If one or more of these new features is important to your organization, then no decision is necessary. Next, check out indexes that are causing deadlock or time-out problems. Because the type 2 index does not use locks, it may give you some relief from these problems. Also consider converting any indexes with very long RID lists or synonym chains. Measurable gains will be realized by converting these indexes. As you become more comfortable with type 2 indexes, look at converting the rest of your non-unique indexes. Finally, convert your unique indexes. Other than easing an index locking problem, converting a unique index could cost you additional space while giving you little improvement over a type 1 index.
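For illustration, a conversion sequence might look like the following sketch (the index name is hypothetical):

```sql
-- Update the Catalog description of the index; this leaves the
-- index in recover pending (RECP) state
ALTER INDEX DSN8410.XEMP1 CONVERT TO TYPE 2;

-- The physical structure is then rebuilt with the RECOVER utility
-- (REORG cannot be used while the index is in RECP):
--   RECOVER INDEX (DSN8410.XEMP1)
```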
And be conscious of your current space requirements when converting any of your type 1 indexes. (See Figure 2.) A worst-case example would be a unique type 1 index with SUBPAGE 1 on a 1,000,000-row table. Converting it to a type 2 index will cost you 2 bytes per key (2,000,000 bytes) and 1 byte per RID (1,000,000 bytes). That's 3,000,000 bytes of additional overhead, so calculate your new space carefully. Be cautious, of course. Remember that a type 2 index is not valid in Version 3. If you create a type 2 index on an object and then fall back to Version 3, the index will not work. Keep track of which indexes you convert or create as type 2. Be especially careful when creating type 2 indexes or converting to type 2 indexes on the DB2 Catalog or Directory, as mentioned earlier.

The type 2 index looks great. It will help solve many concurrency problems caused by indexing, improve index availability, and possibly give you a small performance boost for inserts and cursor deletes. And like most things in DB2, it will just keep getting better and better. So put "convert to type 2 indexes" at the top of your Version 4 migration plan. It will be well worth it. Written by Willie Favero, BMC Software, education and services group. He has been a database professional for more than 20 years, the last 12 dealing primarily with DB2. Most of this time was spent as a senior instructor for IBM. Favero, the author of numerous articles, has contributed to several IBM Redbooks, and he regularly speaks at national technical conferences and area user group meetings.
.pa
SOMETHING OLD, SOMETHING NEW FOR EXISTING AND POTENTIAL DRDA SITES
Several users have asked if DB2 Common Server databases (DB2 for OS/2, DB2 for AIX, DB2 for HP/UX, DB2 for Windows NT, etc.) can be accessed by host DB2 family members (DB2 for MVS, DB2 for VSE&VM, and DB2 for OS/400). The answer is YES!
Host DB2 can function as the Application Requester (AR) while DB2/CS Version 2.1 databases provide the Application Server (AS) capability. For example, a program running in DB2 for MVS can retrieve data stored in DB2 for AIX. Only DB2 for VSE (one half of SQL/DS) can't play, since DRDA AR support hasn't been included in that product yet. What are some practical applications of this technology? We're currently working with one U.S. government agency that is organized as several discrete operating units called bureaus. Each has its own programs and objectives. In this case, it makes sense to develop databases and applications at the bureau level to provide the maximum amount of flexibility and responsiveness to local user requirements. On the other hand, this agency has several enterprise-wide challenges. There are many standard coding and abbreviation data stores that are used throughout the agency: country abbreviations, language codes, and so forth. An even more critical need is to answer congressional inquiries like "How many programs did this agency sponsor in my district last year?" These types of questions generally arise during the budget review cycle, and if you're perplexed as to why anyone would ask such a thing, you lead a very sheltered life. The database architecture would be formed as follows: agency-wide data stored on DB2 for MVS, operational data in a DB2 for OS/2 instance for each bureau. Bureau application programs would use DRDA to access a reference table in DB2 for MVS as needed; otherwise, they would process transactions locally. When senior management must address cross-agency issues, an MVS program could serially access the bureau databases to gather the necessary information. In this scenario, the client/server relational database serves as the mission-critical system of record while the host relational database provides the decision-support capability.
When you begin considering the opportunities this technology presents to the data-management specialist, you'll wonder how we ever succeeded in the old unary database world. The April 2, 1996 DB2 for MVS/ESA V4.2 Preview Announcement (296-107) contained several welcome DRDA enhancements:

* Client-to-host DRDA connectivity across native TCP/IP. Accessing DB2 for MVS using DRDA will no longer require an intervening Distributed Database Connection Services (DDCS) gateway platform. Clients can connect directly to the DB2 Distributed Data Facility (DDF) and won't need AnyNet to provide an SNA "envelope" if you use TCP/IP as your enterprise communications protocol. One customer needed DRDA functionality but didn't have the staff or skills to perform the care and feeding of another operating system like AIX or OS/2 in order to run DDCS on a gateway server. This announcement significantly reduces DRDA's complexity and "liveware" resource requirement.

* Distributed Computing Environment (DCE) tickets can be used for authenticating DRDA clients. When accessing a remote DRDA Application Server using Client Application Enabler (CAE) on DOS-Windows, Windows NT, OS/2, and Macintosh, an end user enters a userid and password usually, though not always, validated by the AS security system. A Windows V3.11 user accessing DB2 for MVS would probably key his or her host RACF password. Unfortunately, this character sequence has been sent in the clear up until now. A clever hacker, or even a legitimate system administrator performing a necessary trace, would be able to capture and read a user's RACF password. Under DB2 for MVS V4.2, that will no longer be the case. DCE will also simplify multiplatform userid and password administration, a topic for a forthcoming column.

* Finally, an enhancement not directly related to DRDA that will deliver a huge benefit to client/server application developers...
DB2 for MVS V4.2 Stored Procedures will now return relational database answer sets rather than a scalar parameter string. Programmers will no longer have the tedious responsibility of retrieving and reassembling the results of an SQL query submitted to a host database. Answer sets similar to those produced by declared cursors embedded in host language programs will be returned to the process initiating a Stored Procedure _ including client/server applications accessing remote databases using DRDA. DRDA is maturing as a key client/server database technology. Most enterprises need to implement decision support and transaction processing applications that access data throughout the enterprise. It might be time to seriously consider what DRDA brings to the table.

DRDA enhancements at a glance...
* Client-to-host DRDA connectivity across native TCP/IP.
* Distributed Computing Environment (DCE) tickets can be used for authenticating DRDA clients.
* DB2 for MVS Version 4.2 Stored Procedures return relational database answer sets rather than a scalar parameter string.

.pa
SAVING TIME BY STARTING WITH THE SAMPLE APPLICATIONS FOR DB2 VERSION 2

Would you like to save 20 percent of your start-up time when coding new applications running against DB2 Version 2 for common servers? Now you can. Much of that wasted time is spent struggling with how to begin your application, especially when using a DB2 function that is new to you. You can eliminate this wasted time by simply starting with a sample application that demonstrates the function you want. Read on to find out where to find such sample applications, the languages they are available in, and how to use them to save valuable time. To illustrate, let's look at an everyday scenario for a developer. Carol Coderight has received international acclaim for developing a world-class sorting application called Sheep and Goats software. It is used throughout the dairy industry.
Last week, her manager, Neville Cantankerous, said, "Carol, this Friday the president of Mister Bigcheese, our number one customer, is dropping by and would like to be shown a small application that he could use to monitor his database for performance when running your Sheep and Goats software. Could you throw something together for him? Thanks. See you at the meeting in my office on Friday at 9:00. Bring your laptop with you to demo it." "Oh, that Neville Cantankerous!" Carol muttered later as she fired up her RS/6000 in her cubicle. "He's been in management too long! If he only knew the problems of starting to write code from scratch! What I really need to begin is ..." "A sample application," Neville Cantankerous said, peering over the top of her cubicle. "I discovered several today while leafing through a DB2 manual during one of our management meetings. In fact, the appendix of the Application Programming manual lists the whole set of them. There's tons of them! There's got to be one you could start your application from. Well, got to get back to my meeting." And Neville Cantankerous rushed off to his meeting, leaving the Application Programming manual on Carol's desk. Carol soon discovered that her manager was not so far removed from reality after all. In the "Sample Programs and Extra Examples" appendix, she found over 60 sample applications that demonstrated embedded SQL and the APIs. Another 60 demonstrated the use of SQL through the Call Level Interface (CLI). Moreover, the sample applications included free source code in a variety of languages for DB2 users. Specifically, the sample applications were coded in C, C++, COBOL, and FORTRAN. Since Carol needed to monitor performance and code her application in C, the db2mon.c program caught her eye. Its description read: Demonstrates how to use the Database System Monitor APIs and how to process the output data buffer returned from the Snapshot API.
"I'll believe it when I see it," said a still somewhat skeptical Carol. "I'll probably never find it in the system, and if I do, it will be so simple it won't be of any real use." So naturally Carol was surprised and then delighted when the appendix information brought her quickly to the subdirectory (/sqllib/samples/c) where she browsed the db2mon.c file. This was no simple application. It was, in fact, several hundred lines long and included all the APIs she herself had been considering! Carol noticed that this sample application, like all the applications that come with DB2, contained the following valuable descriptive information in the header at the top of the file:

HEADING        INFORMATION
PURPOSE        Describes the purpose and behavior of the application.
MAKE           Tells how to create an executable file from the source.
RUN            Tells how to run the executable.
DEPENDENCIES   Lists the external dependencies, such as the compiler expected.

Carol also noted that the sample application contained lots of comments in the source file to indicate what was occurring at that point in the program. (See figure 1.) "Pretty impressive," Carol had to admit. "But let's see what it does!" That was easy to do because, as the header information explained, all she had to do was type "bldmon db2mon.c" to produce an executable file. Carol created the executable, ran the application, and, after looking at the output of the program, realized that it contained 90 percent of the type of information that the president of Mister Bigcheese would need. From that point on, Carol's assignment was much less daunting. She copied the sample application, made a few modifications for her client, and presented her application to the president of Mister Bigcheese the following Friday. For his part, he was impressed that she could code a "complicated application in the completely unreasonable time I requested!"
Afterwards, Neville Cantankerous dropped by her cubicle with an award: a framed photocopy of the "Sample Programs and Extra Examples" appendix. Carol Coderight has been actively using that award ever since. Written by Gary Bist, a DB2 Education and Training course developer. He can be reached by phone at (416) 448-2507 or by e-mail at bist@torolab4.vnet.ibm.com.

.pa
PLUG INTO THE POWER OF SMARTSORT

Just as the high-tech food processor has revolutionized food preparation for millions with fast and efficient functions (chopping, cutting, dicing, shredding, etc.), SMARTsort offers you high-tech power and flexibility to sort, merge, copy, filter, and check your data quickly and efficiently. In fact, manipulating your data has never been easier. You will be amazed at the speed of SMARTsort, and impressed with the capabilities it puts at your disposal to arrange your data in meaningful ways. Just plug in SMARTsort and drop in your data. Use SMARTsort to process your data, and out comes a new and meaningful view of your data that you can really use. Use SMARTsort to create, update, and read files from different computing systems and file organizations. SMARTsort can handle various record structures and data types, and can process data in more than forty national languages. SMARTsort can also process extremely large files. In fact, you are limited only by the amount of data your operating system can handle. SMARTsort can be invoked via a command-line interface or the Application Programming Interface (API). SMARTsort V1.2 runs on AIX 4.1, OS/2 Warp, and Windows NT.

What is the DB2/SMARTsort Accelerator?

Good news for those of you running DB2 on AIX! SMARTsort can now work in conjunction with DB2 2.1.1 to speed up the DB2 LOAD with Index Create function. Once SMARTsort is enabled under DB2, it is automatically called every time the DB2 LOAD with Index Create function is invoked.
In fact, all programs, scripts, or command-line calls that use DB2 LOAD with Index Create can take full advantage of this performance booster. When large amounts of data are processed, improvements of over 70 percent in sorting performance and close to 40 percent in overall LOAD performance have been observed. You also have the option to produce a log of all SMARTsort invocations made by DB2, or you can choose not to create such a log and allow all DB2 calls to SMARTsort to be transparent. Note that writing to a log will affect the overall performance of SMARTsort, but it is useful for purposes such as debugging. Either way, jobs using DB2 LOAD with Index Create can exploit the performance advantages of SMARTsort without any additional setup on your part.

Sounds Great! How Do I Start Using the DB2/SMARTsort Accelerator?

To plug into the power of SMARTsort, you need to have DB2 service pack U441788 installed. To enable SMARTsort under DB2, simply add the DB2SORT environment variable to your .profile or db2profile file. As with other DB2 environment variables, the DB2SORT environment variable must be exported before starting DB2. (See figure 1.) Alternatively, you can export the DB2SORT environment variable directly from the command line. For DB2 to pick up the updated environment variable, you must issue three other commands: db2stop, db2 terminate, and db2start. (See figure 2.) By exporting the optional DB2SORTLOG environment variable, you can collect a log of all DB2 invocations of SMARTsort. (See figure 3.) DB2SORTLOG should be set to the name of a directory that SMARTsort can write to. Each time DB2 invokes SMARTsort, SMARTsort makes an entry in the smrtsort.log file, located in this directory. If no directory name is specified, or an invalid directory name is provided, no log file is created. The smrtsort.db2 file, located in /usr/lpp/smartsort, shows how the DB2SORT and DB2SORTLOG environment variables can be coded in a working shell script. (See figure 4.) You can call this script directly from your db2profile or .profile file.

How Else Can I Plug into the Power of SMARTsort?

Since SMARTsort was developed with the intent to comply with X/OPEN syntax, you can easily replace your AIX system's sort utility with SMARTsort. Making SMARTsort your default sort program can give you better performance and capability than the default sort program provided with the AIX operating system. Additionally, programs written in C, C++, COBOL, FORTRAN, or PL/1 can use the SMARTsort API to dynamically invoke SMARTsort functions. There is no need to change your application program if it is ported to another SMARTsort-supported platform, because the program interface to SMARTsort is the same on all platforms. If you are a sophisticated user, you can take advantage of SMARTsort's user-written input/output exit routines for the sort and merge functions, allowing you to interact with SMARTsort at the record level.

DB2SORT=SMARTSORT
export DB2SORT

Figure 1. DB2SORT environment variable via .profile or db2profile

export DB2SORT=SMARTSORT
db2stop
db2 terminate
db2start

Figure 2. DB2SORT environment variable via the command line

DB2SORTLOG=/tmp
export DB2SORTLOG

Figure 3. DB2SORTLOG environment variable

# Licensed Materials: Property of IBM
# 5765-349
# (c) IBM Corporation 1994, 1996
# All Rights Reserved
#
# NAME: smrtsort.db2
#
# DESCRIPTION: This partial script sample
# demonstrates how to enable SMARTsort under DB2.
#
# USAGE: The contents of this script can be added to
# a user's db2profile file to enable DB2's use of
# SMARTsort during the LOAD operation.
#
# This script can also be invoked directly after
# invoking the db2profile.
#
# A user may want to copy this script into their
# directory structure and customize it.
#
#______________________________
# DB2SORT Default=
# is set to SMARTSORT to enable the use of
# SMARTsort during DB2 LOAD processing.
#______________________________
DB2SORT=SMARTSORT
export DB2SORT

#______________________________
# DB2SORTLOG Default=
# is set to a directory name in which SMARTsort
# is to create a log file (called smrtsort.log).
# Each time SMARTsort is invoked by DB2,
# SMARTsort will write a message to this log file.
# See the SMARTsort Guide and Reference
# for more information.
#______________________________
#DB2SORTLOG=/tmp
#export DB2SORTLOG

Figure 4. Shell Script Coding for DB2SORT and DB2SORTLOG

HOW CAN I LEARN MORE ABOUT SMARTSORT?

For the latest information on SMARTsort, or if you have any questions or comments that you would like to pass on to us:

* Visit our home page on the World Wide Web at URL: http://www.storage.ibm.com/storage/software/sort/srtshome.htm
* Contact our Hotline via e-mail at: smrtsort@vnet.ibm.com

To order a copy of SMARTsort, contact your IBM representative or, within the United States, call 1-800-IBM-CALL. Written by Jane Shen, Sort Products, IBM San Jose. She can be reached at (408) 256-2743 or by e-mail at jshen@vnet.ibm.com.

.pa
BENCHMARK NEWS

* IBM ACHIEVES INDUSTRY-LEADING TPC-C BENCHMARK RESULTS
* IBM PUBLISHES FIRST-EVER TPC-D BENCHMARK ON WINDOWS NT
* IBM PUBLISHES LARGEST (300GB) TPC-D BENCHMARK TO DATE

Transaction Processing and Complex Query Superiority Exhibited by DB2

Four record-setting benchmarks _ all utilizing IBM's award-winning DATABASE 2 (DB2) _ were achieved in April 1996 by IBM, following industry guidelines established by the Transaction Processing Performance Council. In a recent test, DB2 attained the industry's best-ever TPC-C price/performance result. DB2 also achieved the best TPC-C performance and price/performance results on a Sun platform, significantly exceeding Oracle and Sybase numbers on similarly configured Sun systems.
In addition, IBM published the industry's first TPC-D benchmark result on the Windows NT platform, as well as a TPC-D result on DB2 Parallel Edition using 300GB of data _ the largest TPC-D database tested by any vendor.

TPC-C: New World Record

The first TPC-C benchmark, running DB2 on Sun's dual-processor Ultra Enterprise 2 server, established a new world record in price/performance. With a throughput of 3107.17 tpmC and $140.52/tpmC, DB2 beat its competition in speed as well as price/performance. The second TPC-C benchmark was run on Sun's SPARCserver 2000E with sixteen 85 MHz SuperSPARC CPUs. DB2 database server software, running on the Solaris 2.5.1 symmetric multiprocessing (SMP) operating system, delivered results of 6444.63 transactions per minute (tpmC) at a price/performance of $201 per tpmC. When compared with published results for Oracle 7.3.2 and Sybase 11 on similarly configured Sun systems, DB2 delivered 25.76 percent higher throughput than Oracle and 41.81 percent higher throughput than Sybase. DB2 was also 38 percent better in price/performance than Oracle and 49 percent less expensive than Sybase. The TPC-C benchmark simulates an order-entry environment for a warehouse operation. The benchmark has gained industry acceptance because it is considered to be similar to "real world" processing. By simulating the activities found in complex online transaction processing (OLTP) application environments, the TPC-C benchmark exercises the breadth of system components associated with such environments.

TPC-D Results on Windows NT

IBM is also the first to announce TPC-D benchmark results on Windows NT. This benchmark utilized an IBM PC 360 S200, a 200 MHz Pentium Pro uniprocessor system from the IBM PC Company, running DB2 for Windows NT. DB2 for NT delivered a power metric of 44.9 QppD@1GB, a throughput metric of 15.1 QthD@1GB, and a price/performance metric of $1074 per QphD@1GB.
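Figures like these can be cross-checked with a line of arithmetic: since TPC-C price/performance is total system cost divided by throughput, multiplying the two published numbers back together recovers the implied cost of the benchmarked configuration. A quick sketch using the SPARCserver 2000E result above:

```shell
# Implied system cost = throughput (tpmC) x price/performance ($/tpmC).
# The figures are the published TPC-C numbers quoted in the article.
awk 'BEGIN { printf "%.0f\n", 6444.63 * 201 }'
```

That works out to roughly $1.3 million for the benchmarked configuration, the kind of figure the full TPC disclosure reports itemize.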
The second TPC-D benchmark, using 300GB of data and DB2 Parallel Edition, was conducted on a 96-node RISC System/6000 Scalable POWERparallel System 403. DB2 PE delivered a power metric of 835.6 QppD@300GB, a throughput metric of 364.0 QthD@300GB, and a price/performance metric of $32,202 per QphD@300GB. The TPC-D benchmark simulates a complex query application. TPC-D is a decision-support benchmark containing a suite of business-oriented queries and concurrent updates. The queries and the data populating the database have been chosen for their broad industry-wide relevance. The benchmark illustrates decision-support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions. For further information, contact Susan Scott-Ker at IBM media relations (914-766-1463, Internet: susansk@vnet.ibm.com) or Parna Sarkar at Brodeur & Partners (617-622-2833, Internet: psarkar@brodeur.com). Submitted by Michael J. Swift, who can be reached at (408) 463-4105, by e-mail at mswift@vnet.ibm.com, or by fax at (408) 463-4633.

.pa
SERVICE PAK INFORMATION

REFRESHES AND FIXPAKS AVAILABLE FOR DB2 CLIENT SERVER PRODUCTS

DB2 Client Server Version 2.1.1 Refreshes

As promised, we refreshed all DB2 Client Server products in March 1996 to add Version 2.1.1 support for the following languages: Japanese, Spanish, French, German, Italian, Brazilian Portuguese, Korean, and Simplified Chinese. In April 1996, we refreshed the products once again to bring DB2 Client Server Version 2.1.1 to the following countries: Sweden, Norway, Finland, and Denmark. Along with these refreshes, we also provided FixPaks/ServicePaks for current DB2 customers to upgrade. For those customers who are still on Version 2.1.0, we strongly recommend you upgrade to Version 2.1.1 as soon as possible to take advantage of the new features and fixes.
For those customers who are already on Version 2.1.1, it is always a good idea to upgrade to the latest code level. Listed in the chart below are the PTF numbers for the April refresh.

How to Get DB2 Client Server FixPaks/ServicePaks

DB2/OS2 and SDK/Win FixPaks: They can be downloaded electronically from the following locations:

(1) CompuServe
- Execute GO IBMDB2.
- Then go to the DB2/OS2 library to find the DB2/OS2 FixPak, or go to the CLIENTS library to find the SDK/Win FixPak.

(2) Internet
- FTP to anonymous server ftp.software.ibm.com (previously known as ps.boulder.ibm.com) at 198.17.57.66.
- Then go to ps/products/db2/fixes/<language>/<product>, where <language> is the country's language (for example, english-us, spanish, german, etc.) and <product> is the product name and version (for example, db22v21, db2winv21, etc.), to find the FixPaks.
- Use a World Wide Web (WWW) browser to connect to the DB2 Service and Support Home Page (http://www.software.ibm.com/data/db2/db2tech/index.html).

(3) IBM PCC BBS (in the U.S.)
- Call (919) 517-0001 (in Raleigh).
- Then type "db2" on the main menu to find the FixPaks.

The above are the primary locations where these FixPaks are uploaded. They may also be available on other bulletin boards (such as the TalkLink OS/2 BBS in the U.S. and the OS/2 BBS in Canada). If you do not have access to any of the above locations, please call 1-800-992-4777 to request that these FixPaks be sent to you in the mail. For countries other than the U.S. and Canada, please check your local IBM OS/2 BBS or call your local DB2 Customer Service number for assistance in obtaining them.

DB2/AIX ServicePaks: They can be downloaded electronically from the following Internet location:
- FTP to anonymous server ftp.software.ibm.com (previously known as ps.boulder.ibm.com) at 198.17.57.66.
- Then go to ps/products/db2/fixes/<language>/<product>, where <language> is the country's language (for example, english-us, spanish, german, etc.) and <product> is the product name and version (for example, db2aixv21, db2pev11, etc.), to find the ServicePak.
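For scripted downloads, the FTP directory convention described above is easy to capture in a small helper. This is only a sketch: the function name is invented, and the arguments are the example language and product values from the text.

```shell
# Builds the FTP directory path for a FixPak/ServicePak, following the
# ps/products/db2/fixes/<language>/<product> layout described above.
fixpak_path() {
    echo "ps/products/db2/fixes/$1/$2"
}

# For example, the directory for the English-US DB2 for OS/2 V2.1 FixPak:
fixpak_path english-us db22v21
```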
- Use a World Wide Web (WWW) browser to connect to the DB2 Service and Support Home Page (http://www.software.ibm.com/data/db2/db2tech/index.html).

OS/2 PLATFORM (DB2/2, SDK/2, CAE/2, and DDCS/2)

PTF Number   Language
WR08090      English
WR20596      Danish
WR20597      French
WR20598      Japanese
WR20599      Norwegian
WR20600      Simplified Chinese
WR20601      German
WR20602      Korean
WR20603      Brazilian Portuguese
WR20604      Spanish
WR20605      Finnish
WR20606      Italian
WR20607      Swedish

WINDOWS PLATFORM (SDK/Win and CAE/Win)

PTF Number   Language
WR08091      English
WR20608      Danish
WR20609      French
WR20610      Japanese
WR20611      Norwegian
WR20612      Simplified Chinese
WR20613      German
WR20614      Korean
WR20615      Portuguese

AIX PLATFORM (DB2/AIX, SDK/AIX, CAE/AIX, and DDCS/AIX)

PTF Number   Language
U442530      All supported languages

If you do not have access to any of the above locations, please call 1-800-237-5511 to request that this ServicePak be sent to you in the mail. For countries other than the U.S. and Canada, please call your local DB2 Customer Service number for assistance in obtaining this ServicePak.

.pa
DB2 WORLDWIDE EVENTS

Look for DB2 at these upcoming events!

DB2 Technical Conference
October 14 to 18, 1996
Miami Beach, Florida

Data Warehouse Technical Conference
October 14 to 16, 1996
Miami Beach, Florida

IBM VSE and VM Conference
October 14 to 18, 1996
La Hulpe, Belgium

IDUG European Conference
October 21 to 24, 1996
Amsterdam, The Netherlands

IDUG Asia Pacific
November 13 to 15, 1996
Melbourne, Australia

.pa
NEWSLETTER INFORMATION

This quarterly newsletter is produced by the Software Marketing Centre at the IBM Software Solutions Toronto Laboratory. For further information on any of the products mentioned, contact your local IBM office or an authorized IBM Business Partner.
Don't hesitate to contact us about newsletter content, subscriptions, or article ideas in one of the following ways:

INTERNET: db2news@vnet.ibm.com
COMPUSERVE: Enter GO IBMDB2
IBM VNET: TOROLAB2(DB2NEWS)
FAX: (905) 316-4733

.pa
DO YOU HAVE PRODUCT ENHANCEMENT IDEAS?

There are two ways to let us know!
* Discuss your requirements with your IBM representative, and have him or her submit them to the Database Technology group at TOROLAB2(DBMREQ).
* Or, send your detailed requirements to IBM using the Reader's Reply form.

INTERESTED IN RECEIVING BACK COPIES OR A SUBSCRIPTION TO THE NEWSLETTER?
* Indicate your interest on the Reader's Reply form and mail or fax it back to us.

The next newsletter will be available in September 1996.

Sample code is provided for information purposes only, and is used by readers at their own risk. IBM makes an effort to provide accurate and safe code examples, but does not guarantee their correctness.

(c) IBM Corporation 1996.