Table of contents
1.0 About this release
2.0 Product support overview
3.0 Notices and trademarks
1.0 About this release
IBM(R) Rational(R) Data Architect Version 7.0 contains enhancements and fixes to the version 6.1 release.
Back to the table of contents.
1.1 New in this release
IBM Rational Data Architect Version 7.0 introduces new features for the
following components:
Database connectivity
New data sources
The following data sources are new in this release:
- DB2 Version 9 for Linux, UNIX, and Windows
- DB2 Universal Database for iSeries Version 5 Release 4
- Derby Version 10.1
- Informix Dynamic Server Version 10.1
- Microsoft SQL Server Enterprise 2005
- Sybase Adaptive Server Enterprise Version 15
- MySQL Versions 4.0 and 4.1
- Generic JDBC
Support for Windows/client authentication
There is a new Use client authentication check box on the Connection Parameters page of the New Database Connection
wizard that allows you to use Windows or client authentication when you
are connecting to a DB2 UDB for Linux, UNIX, and Windows database.
Data model import/export
There are two new methods for data model import and export:
- Import and export of logical and physical data models using the Data Model
Export and Data Model Import wizards
- Import and export of glossary and physical data models using the Export
Model to Metadata Server and Import Model from Metadata Server wizards
Data Model Export and Data Model Import wizards
Using the new Data Model Export and Data Model Import wizards, you can
import and export logical and physical data models from Rational Data Architect
to supported tools. This feature was also available in Rational Data Architect
Version 6.0.0.1. The following data model formats are supported:
- CA ERwin, Version 3.x (ERX format)
- CA All Fusion ERwin Data Modeler, Version 4.x (ER1 format)
- CA All Fusion ERwin Data Modeler, Version 4.x (XML format)
- IBM(R) Rational(R) Data Architect (physical data models
and logical data models)
- IBM Rational Rose(R), Version 4.0 (MDL format)
- IBM Rational Rose Data Modeler (MDL format)
- Sybase PowerDesigner (physical data models and conceptual data models)
- Sybase PowerDesigner DataArchitect (physical data models and conceptual data
models)
To enable other import/export bridges, complete the following steps:
- Open the following file for edit: <RDA_installation_directory>\rda_prod\eclipse\plugins\com.ibm.datatools.metadata.wizards.miti.win32_1.0.0\MetaIntegration\conf\MIRModelBridges.xml
- Set the corresponding "enabled" attribute of the bridge that
you want to enable to "true".
The bridges that you enabled will appear in the Data Model Import and Data
Model Export wizards.
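The MIRModelBridges.xml edit described above might look like the following sketch. This is an assumption about the file's layout: only the "enabled" attribute is documented here, and the element and bridge names are hypothetical, so check the structure of the installed file before editing.

```xml
<!-- Hypothetical sketch of MIRModelBridges.xml; element and bridge
     names may differ in the installed file. -->
<MIRModelBridges>
  <!-- Set enabled="true" for each bridge that you want to appear
       in the Data Model Import and Data Model Export wizards. -->
  <Bridge name="OracleDesigner" enabled="true"/>
  <Bridge name="SilverrunRDM" enabled="false"/>
</MIRModelBridges>
```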
For more information on which bridges can be enabled, go to http://www.metaintegration.net/Products/MIMB/SupportedTools.html.
Documentation for this new feature is installed in the information center. To view the documentation, click Help > Help Contents to open the information center, click Creating data models, and then click Importing and exporting data models. For additional information, go to http://www.metaintegration.net/
Export Model to Metadata Server and Import Model from Metadata Server wizards
Using the Export Model to Metadata Server and Import Model from Metadata
Server wizards, you can transfer metadata between Rational Data Architect
version 7 and the repository of IBM WebSphere Metadata Server. You can
transfer glossary model metadata back and forth from Rational Data Architect
to the Metadata Server. You can also export physical data model metadata
from Rational Data Architect to the Metadata Server. To use this feature,
you must have Microsoft XML Core Services (MSXML) 4.0 Service Pack 2 installed
on the same computer, along with one of the following products: IBM WebSphere
Information Services Director, IBM WebSphere Information Analyzer, or IBM
WebSphere DataStage and QualityStage Designer.
Data diagramming
The following features are new for data diagramming:
- Print preview
- Zoom tool on the diagram palette
- Duplicate command
- Support for dragging and dropping attributes between tables and entities
- Appearance properties to override and control the default appearance of the diagram
- New Documentation and Annotation Properties view fields
- New Data Diagrams folder in data design projects
Physical, logical, and storage data modeling
The following features are new for physical, logical, and storage data
modeling:
- URL support (Add Data Object > URL): With this feature, you can document model information and link to it
from the URL that is associated with the model.
- Implicit primary key support: You can mark a primary key object in a model as Enforced. If the primary key is not marked as Enforced, it is unenforced or implicit. During reverse engineering, you can specify whether or not to infer implicit primary keys from unique indexes.
- You can add a foreign key to a data model by using the Add Data Object context menu in the Data Project Explorer. Previously, this was not supported.
- DB2 for Linux, UNIX, and Windows Version 9 range partitioning support:
There are several new properties in the Properties view for DB2 for Linux,
UNIX, and Windows Version 9 tables that you can use to model partition
groups.
- DB2 UDB for Linux, UNIX, and Windows support for ORGANIZE BY DIMENSIONS: There is a new Dimensions tab in the Properties view for DB2 UDB for Linux, UNIX, and Windows tables
that allows you to specify columns as dimension columns.
- There is new support for the XML data type in logical data models.
- Apply Table Space wizard: Use this wizard to easily apply a table space
to multiple tables at the same time, or to create new table spaces for
multiple tables based on an existing table space in your model.
- Key migration Preferences page and options: Use the Key Migration page in the Preferences window to specify how you want to handle naming conflicts during key migration. If the preferences have not been set, a prompt window opens whenever naming conflicts occur; use this window to specify how to handle each naming conflict individually.
Glossary modeling
The following features are new for glossary modeling:
- New glossary model organization: In the previous release, glossary models
were organized in a flat format. In this release, glossary models can be
organized in a hierarchy. There is support for new glossary model elements
such as categories, terms, reference words, containing words, and status.
- New glossary model editor: The glossary model editor has been enhanced
so that you can modify the new hierarchical elements in the glossary model.
You can also still modify a flat glossary model in the editor.
- Naming standard content assist: Content assist is now available to browse
glossary models that are associated with your project to help you easily
create standard, compliant names. Content assist is available from the
Properties view and also in data diagrams.
Data model transformation
There is new support for transforming UML models to logical data models,
or for transforming logical data models to UML models. To use these transformations,
create and run a UML-to-LDM or a LDM-to-UML transformation configuration.
There is also a supplied logical data model profile that can be applied
to UML models. This profile contains several stereotypes that you can use
to mark up your UML model; the stereotypes control how a UML-to-LDM
transformation transforms each model element into logical data model objects.
This feature allows you to integrate with UML models that you create in
Rational Software Modeler.
Model reporting
You can now generate PDF reports for mapping models and glossary models.
Web reports are not yet supported for these model types. Use Adobe Acrobat
Reader to display the published PDF file so that the hypertext links work.
On Linux, PDF Viewer also works, but GGV does not support hypertext links
in the PDF file.
XML support
There is new support for XML in DB2 Version 9 for Linux, UNIX, and Windows:
- Support for the XML data type
- Support for XML schemas
- XML document validation
- Annotated XSD Mapping editor
Stored procedure support for the XML data type
- You can create stored procedures that contain XML data type parameters
or return XML data types.
- You can run stored procedures that contain XML data types as input or output
parameters.
Data Output View XML support
- You can view XML data type columns on the Results page.
- For any column that can contain XML documents, you can view the content
as a tree or the document text.
SQL builder XML support
- The XML data type is displayed anywhere that other data types are displayed.
- You can select XML functions in the Expression builder.
- You can run SQL statements that contain host variables where the column
associated with the host variable is an XML data type.
- You can insert or update column values when the column value is an XML
data type.
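As a sketch of the host variable and XML column support described above, the following statements could be built and run in the SQL builder; the table, column, and host variable names are hypothetical.

```sql
-- Hypothetical table: ORDERS(ID INTEGER NOT NULL, DETAILS XML)
-- Run a statement that contains a host variable:
SELECT DETAILS FROM ORDERS WHERE ID = :orderId;

-- Insert and update values for an XML column, with the host
-- variable :orderDoc bound to the XML column:
INSERT INTO ORDERS (ID, DETAILS) VALUES (1, :orderDoc);
UPDATE ORDERS SET DETAILS = :orderDoc WHERE ID = 1;
```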
XML schema support
- From the Database Explorer, you can load existing XML schemas and XML schema
documents from the XML schema repository in the database and view properties
such as target name space or schema location.
- You can register a new XML schema with its corresponding XML schema documents
from the file system.
- You can drop XML schemas and XML schema documents from the XML schema repository
in the database.
- You can view and edit the source for XML schema documents that make up
an XML schema.
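The register and drop operations above correspond to the XML schema repository (XSR) in DB2 Version 9. As a hedged sketch of the equivalent operations outside the workbench (the URI, file name, and relational name are hypothetical):

```sql
-- Register an XML schema from the file system into the XSR and
-- complete the registration (DB2 9 command-line syntax sketch;
-- names and paths are hypothetical):
--   REGISTER XMLSCHEMA 'http://example.org/customer.xsd'
--     FROM 'customer.xsd' AS store.custschema COMPLETE

-- Drop the registered schema from the XSR:
DROP XSROBJECT store.custschema;
```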
XML document validation from the table data editor
- You can edit and update an XML data type column.
- You can perform XML value validation for the XML document in the column
against a registered XML schema.
Logical data modeling support for the XML data type
- You can specify the XML data type for logical data model attributes.
Annotated XSD mapping editor
- You can use the Annotated XSD Mapping editor to create annotated XML schema
documents (XSDs) for instance document decomposition. Using the mapping
editor, you can graphically create XSD-to-relational mappings and then
generate the corresponding annotations in the source XSD files. You can
then use the workbench to register the annotated XSD files on a DB2 server.
The mapping editor simplifies the creation of these annotations, which
can be an error-prone task when performed manually.
The following features are new for the information integration mapping
editor:
- You can specify logical data models as a mapping model source or target. This feature is designed for reporting purposes; script generation is not supported for logical data model mappings.
- You can bookmark mapping lines.
- There is a new Documentation tab in the Properties view for mapping lines,
which you can use to annotate or document the mapping line.
- You can hide schema elements that are not of interest in the Mapping Groups
and Mapping Group Details views. This feature was previously only available
in the Mappings view.
- Tables are now organized in the mapping editor in the same way that they
are organized in the Data Project Explorer. This feature allows you to
more easily browse the tables in the mapping editor.
- There are tooltips available for mapping lines to allow you to easily see
the endpoints of the mapping line. Hover over a mapping line to see the
endpoint information.
You can set preferences for DDL script generation on the Code Template
page of the Preferences window. Use the Code Template page to add SQL statements
to the beginning or end of DDL scripts that are generated by the workbench.
When you set the statement syntax in the Preferences window, these statements
are automatically added to the generated DDL scripts so that you do not
need to modify the DDL script manually to add these statements.
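For example, the effect of code templates on a generated script might look like the following sketch; the CONNECT and COMMIT statements are hypothetical examples of statements that you might add on the Code Template page.

```sql
CONNECT TO SAMPLE;                            -- hypothetical statement added to the start

-- ... DDL generated by the workbench, for example:
CREATE TABLE DEPT (DEPTNO CHAR(3) NOT NULL);

COMMIT;                                       -- hypothetical statements added to the end
CONNECT RESET;
```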
You can run an SQL stored procedure that targets DB2 UDB for Linux, UNIX,
and Windows Version 8.2 or higher to capture tuning data. When you capture
tuning data for SQL procedures, the collected data is presented next to
the source code for each procedure. Application developers or database
administrators can use this data to more efficiently tune resource-consuming
statements or algorithms.
The following known problems have been fixed in this release:
- Compare and synchronization
At times after a synchronization, changes in options do not refresh correctly in the Structural View.
When you compare an object in the Database Explorer with another object,
and then synchronize the information, the DDL that is generated is not
always accurate.
- Analyze impact
In some scenarios, the product can shut down while you are performing an
impact analysis. This happens when you try to move or minimize the progress
dialog or the progress bar while the impact analysis is occurring. To avoid
this problem, do not move or minimize the progress dialog or the progress
bar during impact analysis, and make sure to save all of your work before
you perform an impact analysis.
Back to the table of contents.
1.3 Known problems, limitations, and workarounds
The following information describes the currently known limitations, problems,
and workarounds. The Rational Data Architect Support Web site also contains
technotes and service flashes that describe changes to the documentation
and known limitations and workarounds that were discovered after this document
was created. The Rational Support Web site address is: www.ibm.com/software/data/integration/rda/support/
- MySQL
Limited support for MySQL 4.1: The following properties are not displayed
correctly in the Properties view: unique index, auto increment columns,
column default value for NULL and binary. In addition, C procedures and
functions are not supported.
- Connecting to ODBC sources on Linux or Windows
Due to a JDK problem with previous releases, you might not be able to connect
to ODBC data sources using RDA on Linux or Windows unless you have DB2
Universal Database for Linux, UNIX, and Windows version 8.2 FixPak 11 or
later.
- Compare and synchronization
- Data diagrams
- The Delete from model action on a diagram shortcut object does not delete the diagram object
from the model. To work around this issue, you can delete the diagram object
from the model by using the Delete action in the Data Project Explorer.
- There are some limitations when saving large diagrams to an image file.
Sometimes when you save a large diagram as an image file (right-click in a blank area of a diagram, and select File > Save As Image File) the image file is not created. Entries are created in the log file if
logging is enabled, but there is no error message. To work around this
problem, you can break up the large diagram into smaller diagrams.
- On Linux operating systems, diagram print functions do not work.
- The new Zoom icon on the data diagram palette is not accessible using the
keyboard. To use this function with the keyboard, select the Zoom menu
action from the data diagram toolbar.
- Index partitions
For zSeries Version 8 Compatible Mode databases only: The partition option (Use Partition) is not supported for index partitions. For index definitions, the Storage Group option is always used.
- Reverse engineering from a DDL file
For DB2 Universal Database for Linux, UNIX, and Windows and zSeries only:
By default, the parser assumes that the terminator is the semicolon (";").
If the file uses a different terminator character, you must include the
following statement in the first line of your DDL file:
-- <ScriptOptions statementTerminator="@" />
Where "@"
is the terminator character that your DDL file uses.
- DDL generation
- When you run Generate DDL for a schema with nicknames, you might see a message in the Data Output view similar to the following: Table xxx already has a primary key. You can ignore this message.
- DDL statements for stored procedures whose names require a delimiter (for example, CREATE PROCEDURE "a.b") are not generated correctly: the delimiting quotation marks are omitted. In the example, the DDL is generated as CREATE PROCEDURE a.b. To work around this issue, modify the generated DDL statement to include the delimiting characters.
- DDL parser for DB2 UDB for z/OS
The following DDL statements have limitations:
- ALTER TABLE: The following alterations are not supported: ADD PARTITION; ADD/DROP RESTRICT ON DROP; DROP MATERIALIZED QUERY.
- SET CURRENT SQLID: Only SET CURRENT SQLID = string-constant is supported.
- SET SCHEMA: Only SET SCHEMA = schema-name and SET SCHEMA = string-constant are supported.
- DDL parser for Oracle
- The REPLACE clause is not supported.
- TIMESTAMP is not supported as a data type when reverse engineering from
a DDL file.
- Server discovery
On a Linux operating system, the Undefined Remote Servers do not appear
for ODBC data sources unless you create an ODBC wrapper with the wrapper
name ODBC outside of the Rational Data Architect product, for example by
using the DB2 Universal Database Control Center or a command line. You must
name the wrapper ODBC so that it is properly discovered. On a Linux operating
system, the wrapper is defined with a MODULE wrapper option, as in the
following example:
CREATE WRAPPER odbc LIBRARY 'libdb2rcodbc.so' OPTIONS (MODULE '/usr/lib/odbc.so')
In this example, MODULE '/usr/lib/odbc.so' is the full path
to the library that contains the ODBC Driver Manager.
- Discover function and mapping editor
- The reference to data model files (DBM, LDM or XSD) in an MSL file is not
updated automatically when you copy, move, or import the data model files.
The mapping editor will not load the MSL file correctly if the reference
to the data model files is invalid. Update the reference manually in the
MSL file by opening it with a text editor (right-click the MSL file and
select Open With > Text Editor). Change the XML attribute "location" of the <msl:inputs> and <msl:outputs> elements to the correct path to the data model file starting with the project name (for example,
/myProject/SourceDB.dbm
).
- When you switch focus from a mapping line to a tree node in the mapping
editor, the property page is empty directly after the switch. To work around
this issue, select the tree node again to see the tree node properties.
- In the mapping editor for logical data models, relationship discovery finds
matches between package names if the packages contain entities that do
not contain any attributes. When you accept this match there is no mapping
line visible in the mapping editor. However, a report created from this
mapping will show an accepted discovered match between the packages. To
work around this issue, do not accept mappings between package names.
- There might be some cases when the Advanced Configuration wizard does not recognize your WordNet installation. If this problem occurs, ensure that the WNHOME system variable is set for WordNet. The variable should be set to the root directory where WordNet is installed, for example,
C:\Program Files\WordNet\2.1
.
- If you run discovery with data sampling algorithms against a DB2 Version 9 for Linux, UNIX, and Windows database and discovery returns an error, run the following bind command from a DB2 command line on the database:
C:\SQLLIB\bnd>db2 bind db2schema.bnd
- If you add a bookmark to a mapping line, the bookmark is indicated by an icon on the mapping line in the editor and Outline view, and the bookmark is also added to the Bookmark view. However, the screen reader does not read that there is a bookmark on a mapping line in the mapping editor. To work around this issue, you can use the screen reader to read the bookmarks in the Bookmark view.
- When you launch the Discover Relationships function, be aware that aliases are treated as tables. You should decide whether to include them in the set of source schemas, or the target schema when you define the scope of the discover function.
- For the algorithms that include data sampling, only the data in Oracle
and DB2 databases are sampled. To cache the sampled data, you must specify
a cache database. Only DB2 UDB for Linux, UNIX, and Windows is supported
as a cache database.
- On Linux operating systems, the thesaurus option for the semantic name
algorithm using WordNet and Sureword is not supported. The thesaurus option
using a glossary model is supported.
- User Defined Types (UDTs) are not sampled when you discover relationships.
- In the Mapping Editor preferences, when you set the preferences for discovering relationships, the Algorithms page contains a selection for how to order multiple algorithms. You can specify Composition by sequence or Composition by weight. When you select Composition by weight, a weighting value is assigned to each algorithm. Currently, this option applies only to algorithms that return a single value.
- The SQL/XML query generation ignores the actual value of x in an XSD attribute maxOccurs="x" if x is a number greater than or equal to 1. The generated query creates XML elements for all rows from a source column; it does not limit the number of selected rows to the number defined by maxOccurs. This is due to the inability of SQL:2003-conforming SQL/XML queries to express this requirement. For elements that are defined with the attribute maxOccurs="0", the mapping editor prohibits a mapping. Therefore, elements defined as maxOccurs="0" will not appear in the result.
- When UDTs are present on the target side, the generated scripts might not parse due to null value handling for UDTs.
- In the following two scenarios, not all of the artifacts are generated
in the DDL script, and the script cannot be deployed without modification:
- You are mapping from source table T1 in A.dbm to target table T2 in B.dbm,
and neither A nor B are federated to an Information Integrator server.
In this case, the only deployment platform available will be A.dbm and
only an insert script is generated. No table object T2 for A.dbm is generated,
even though this is necessary for the script to run. If you want to run
the script, you must create the table.
- You are mapping from source table T1 in A.dbm to target table T2 in B.dbm,
and both A.dbm and B.dbm are federated to an Information Integrator server.
In this case, the Information Integrator server is available as a deployment
option. However, if you select the Information Integrator server, only
the nicknames for T1 and the insert script are generated. You must generate
the nicknames for T2 from B.dbm onto the Information Integrator server
before the script will run properly.
- Federation support
- You can generate DDL scripts for the federated server from the Database
Explorer. After you generate a script, you can deploy to like servers
on DB2 Universal Database for Linux, UNIX, and Windows, DB2 Universal Database
for iSeries, Oracle, SQL Server, Teradata, web services, XML, and Sybase.
To deploy the DDL scripts on any other data source, you must deploy them
using the DB2 command line (run them as DB2 scripts). When you deploy,
you might get a message saying that the wrapper already exists. If you
see this error, then disconnect the database connection and reconnect.
- After you create a federated server in the Database Explorer, the newly created server will not automatically be displayed in the Defined Server folder. You must refresh the folder to see the new server.
- Object name character limitations
Do not create an object that has quotation marks in the name. An object name delimited with quotation marks does not work. The following examples are not currently supported:
"""PROCEDURE"""
"""TABLE"""
"""SCHEMA"""."""PROCEDURE"""
- The mapping editor does not support the slash ("/") character in object names. The following example is not supported in the mapping editor:
DBM/NAME
- ClearCase
- RequisitePro
- All of the menus associated with RequisitePro integration appear in English
only.
- Glossary modeling
- You cannot access the naming content assist icon in the Properties view by using the keyboard. To work around this issue, click Window > Preferences > Data > Naming Standard to view the naming standard patterns.
- Screen readers cannot read the content assist window in the Properties
view. To work around this issue, you can open the glossary model that is
associated with the current project in the glossary model editor to read
the entries.
- SQL Tools
- The SQL Editor does not currently support host variables during the Run
SQL action. To work around this issue, you can run the SQL from the SQL builder,
if it is a DML statement.
- If you modify a statement in the SQL source area of the SQL builder and then you save the statement while it is invalid, the current text is not saved. Instead, the text that was in the SQL source area before modifications were made to the SQL source area is saved. If you attempt to run the invalid statement from the SQL builder, the last valid statement is run instead.
- In the SQL builder, the product does not draw lines in the graphical tables for conditions that are specified in the WHERE clause that represent a join.
- The full SQL syntax is not supported. For example, User Defined Types (UDTs)
and Table functions are not supported.
- XML
- To use XML data types and work with XML schemas, you must connect to a UTF-8 database.
- The amount of data returned from the database for XML documents is unlimited. Depending on the amount of data that you return, performance might be affected.
- If you define a table that contains XML data, but does not include a primary key, updating the XML column will fail in the table editor. You must add a primary key or unique index to the table that contains the XML data.
- Working with multiple root elements in the Annotated XSD mapping editor
can lead to errors when you save the annotated XSD file. To work around
this issue, create a separate set of XML Schema document files for each
root element.
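The primary key workaround for XML column updates described above can be sketched as follows; the table and column names are hypothetical.

```sql
-- Hypothetical table with an XML column but no primary key:
--   CREATE TABLE ORDERS (ID INTEGER NOT NULL, DETAILS XML)
-- Add a primary key so that the table editor can update the XML column:
ALTER TABLE ORDERS ADD CONSTRAINT PK_ORDERS PRIMARY KEY (ID);
```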
- Routine development
- User-defined types (UDTs) are not supported as parameters for routines.
- When deploying a stored procedure or a user-defined function using the
Ant deployment feature, the following message might appear if you do not
have the tools.jar file located in your classpath:
Unable to locate tools.jar. Expected to find it in F:\jre\1.4.2\lib\tools.jar
. Ignore this message. tools.jar is part of the Java Runtime Environment
(JRE), not part of the Ant deployment feature.
- To deploy Java stored procedures that target DB2 UDB for iSeries from the file
system by using Ant deploy, you must ensure that you have the jt400.jar in your
system classpath.
- When you create a Java stored procedure and change the method name, right-clicking in the editor and clicking Save does not work. To save the updated stored procedure, click File > Save.
- If you attempt to deploy an exported stored procedure by using the instructions in DeployInstructions.txt, you might get an error message that says:
...[createsp] Could not connect to the target database. [createsp]
com.ibm.db2.jcc.DB2Driver...
To work around this issue, ensure that db2jcc.jar and the appropriate license files are in your system classpath.
- Before you delete a data development project, close the open routines and
SQL editors that belong to the project. If you do not close the open routines
and SQL editors, the project and its contents will still be deleted, but
you will see error messages.
- You might see a "cannot load class" error when you deploy or run Java stored
procedures. This error can occur when there is a mismatch in JDK version between
RAD v7 and the DB2 server, if the DB2 server is on a down-level JDK. To prevent
this error, specify the "-source 1.4" option in the Compile options field of the
Deploy Routines wizard when you are deploying Java stored procedures against
servers that use a JDK level of 1.4 (for example, a DB2 Universal Database for
Linux, UNIX, and Windows V8.2 server). In general, use the appropriate
compilation option "-source <JDK level>" to match the JDK level on the database
server.
- If you drag and drop a stored procedure or UDF between unlike servers (for
example, from a DB2 UDB for Linux, UNIX, and Windows server to a DB2 UDB for
z/OS server), you will see a warning during the drag and drop operation about
certain incompatibilities between the two servers. If you continue with the
operation and then try to open the stored procedure or UDF, you might see an
error.
- Running SQL Profiling against a DB2 UDB for Linux, UNIX, and Windows V8.2 server
may cause a null pointer exception if the server is missing the prerequisite
stored procedure (SYSIBM.SQLCAMESSAGECCSID) that is required by the JCC driver
to retrieve error message text. To work around this issue, you can create a
connection to the server without the retrieveMessagesFromServerOnGetMessage=true
setting.
- During monitoring of the execution of SQL procedures, profiling events
are generated for DML statements such as INSERT, SELECT, DELETE, and UPDATE
that are issued in the procedure. However, events are not generated in
a deterministic fashion for procedural statements for variable assignments
and control structures such as WHILE or IF. Therefore, tuning data will
not be captured for these procedural statements.
- Stored procedure debugger
- When you are connected to a UNIX DB2 server, timeout exceptions can occur when
you are adding breakpoints or running in debug mode.
- The debugger does not run for a stored procedure whose name contains both
English and Chinese characters.
- Watch expressions are only supported for dynamic Java stored procedures. They
are not supported for SQL and SQLJ stored procedures.
- The debugger does not stop at a breakpoint if it is not positioned at the first
token of an executable statement, such as SET. In addition, it does not stop on
DECLARE CONTINUE, CLOSE CURSOR, or ROLLBACK.
- If you are debugging a Java stored procedure and you select a Terminate action,
it might take several minutes for the debug session to fully terminate. New
debug sessions that are started during this time may behave erratically.
- If you are debugging a Java stored procedure that calls a second Java stored
procedure, you cannot debug the second stored procedure. You cannot step into
the nested stored procedure, and any breakpoints that you set in the nested stored
procedure will be ignored. This restriction is for DB2 UDB for Linux, UNIX, and
Windows.
- If you get a "Timeout occurred while waiting for packet" error while you are
debugging a Java stored procedure, try increasing the Java timeout setting. To
increase the Java timeout setting, click Window >
Preferences from the workbench menu bar. Expand the
Java node and click Debug. On the Debug
preferences page, increase the Debugger timeout(ms) value in
the Communication timeout section. It is recommended that you
at least double the default value.
- When you are debugging a Java stored procedure, if you use the Change
Value action to modify a variable that has an empty string value, the
OK button in the edit dialog might not become enabled. To
enable the button, select the Input an evaluation radio button,
set the value to a non-empty string (for example, 'a'), and then select the
Input literal text radio button. The OK button
will then be available.
- If you do not see local variables when you are debugging a Java stored
procedure, the stored procedure might have been deployed without the -g compiler
option. Ensure that you specify the -g compiler option when you deploy Java
stored procedures.
- If you see an 'invalid stack frame' message in the Variables view, go to the
Debug view and click on the thread object above the stack frame and then click
on the stack frame. This should refresh the Variables view and the error should
no longer appear.
- When you are debugging an SQLJ stored procedure that is running on DB2 UDB for
iSeries V5 R4, the current line that is being executed will not correspond to
the indicated SQLJ source line displayed in the Debug view unless you have
applied an iSeries PTF that updates the linemap to correspond to the SQLJ source
instead of the Java source.
- Debugger preferences for session manager timeout are not recognized. These
preferences are set as follows: Click Window >
Preferences, expand the Run/Debug node, and click
DB2 Stored Procedure Debugger. Modify the Session
manager timeout in minutes field.
- The debugger cannot process a stored procedure that has a large number of
variables on DB2 for Linux, UNIX, and Windows. The maximum number of variables
is 200.
- Cursor movement in a debug session: In some cases, when a single line in a
procedure declares more than one variable, you must click Step Into
or Step Over more than once in order to move to the next line.
You must click once for each variable that the line declares. For example, you
must click twice on this line: DECLARE v_dept, v_actdept CHAR(3); and three
times on this line: DECLARE v_bonus, v_deptbonus, v_newbonus DECIMAL(9,2);
- If you start a debug session for a Java stored procedure, add breakpoints,
and then disable them, the breakpoints remain enabled. To work around
this issue, when you start a new debug session, first remove all of
the old breakpoints and then add new breakpoints.
- In some cases when you are working with multiple data development projects, you
might see one of the following errors when you attempt to debug a stored procedure:
"Unable to locate stored procedure PROCNAME. Procedure may have been deleted
from workspace" or "Source not found".
- If you debug an SQL stored procedure immediately after you terminate a debug
session for a Java stored procedure, the debugger might show the message "User defined
function ... has been interrupted by the user." To work around this issue, try
debugging the SQL stored procedure again.
- Table data editor:
  - If you define a table with a single column of XML data type, or any table
    with non-unique rows, and then use the table data editor to delete a row, all
    rows that match the selected row are deleted. To work around this issue,
    do not use the table data editor to delete a row in a table with duplicate
    rows.
  - In the table data editor, if you perform an XML validation on an XML table that
    does not have a primary key, the XML validation works only the first time,
    when you insert the XML value. In addition, an update of an existing XML column
    with XML validation fails. To work around this issue, create a primary key
    for tables that contain XML columns.
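The primary-key workaround can be sketched as DB2 SQL DDL. The table and column names below are hypothetical examples, not taken from the product documentation:

```sql
-- Hypothetical table: an XML column plus an explicit primary key.
-- The primary key lets the table data editor identify individual rows,
-- so XML validation works on updates as well as on the initial insert.
CREATE TABLE customer_docs (
    doc_id  INTEGER NOT NULL PRIMARY KEY,
    profile XML
);
```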
Back to the
table of contents.
This section describes which data sources and data objects are supported
in Rational Data Architect.
- DB2 Universal Database for Linux, UNIX, and Windows, Enterprise Edition
and Workgroup Edition
- Version 8.1
- Version 8.2
- Version 9.1
- DB2 Universal Database for iSeries
- Version 5 Release 2
- Version 5 Release 3
- Version 5 Release 4
- DB2 Universal Database for z/OS
- Derby
- Version 10.0
- Version 10.1
- Informix Dynamic Server
- Version 9.2
- Version 9.3
- Version 9.4
- Version 10.0
- Microsoft SQL Server Enterprise
- MySQL
- Oracle 8i
- Oracle 8i Enterprise Edition
- Oracle Enterprise Edition 9i
- Oracle 10g
- Sybase Adaptive Server Enterprise
- Version 12.0
- Version 12.5
- Version 15
Back to the
table of contents.
2.2 ClearCase support
Rational Data Architect
supports the IBM Rational ClearCase Remote Client adapter and IBM Rational
ClearCase LT, which provide services for development teams to work with resources
in a shared repository.
For information on installing ClearCase LT, see the technical note called "Acquiring ClearCase LT as part of the Software Development Platform". To find this technical note, go to http://www.ibm.com and enter 1188585
in the Search box. For information on installing the ClearCase Remote Client adapter, install Rational Data Architect, select Help > Help Contents from the menu bar, and search for the topic named Support
for sharing data projects in Rational Data Architect.
Back to the
table of contents.
The following two tables describe the objects that Rational Data Architect
supports. "Yes" indicates that the support is available. "No"
indicates that some or all of the function is not available. "N/A"
indicates that the data source does not support
that object.
Table 1. Creating models

Object | Universal Database | zSeries | iSeries | Derby | Oracle | SQL Server | Sybase | Informix
Table | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Table partition key | Yes | Yes | No | No | No | No | No | No
View | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Alias | Yes | Yes | Yes | No | No | No | No | No
Materialized query table | Yes | Yes | No | No | Yes | No | No | No
Nickname | Yes | N/A | N/A | N/A | N/A | N/A | N/A | N/A
User defined type - distinct | Yes | Yes | Yes | No | No | No | No | No
User defined type - structured | Yes | No | No | No | Yes | No | No | No
Sequence | Yes | Yes | No | No | Yes | No | No | No
Procedure | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
User defined function | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Method | No | No | No | No | No | No | No | No
RoutineResultTable | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
RoutineResultTable parameter | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Remote server | Yes | N/A | N/A | N/A | N/A | N/A | N/A | N/A
Storage | Yes | Yes | No | No | Yes | No | No | No
Range partition | Yes | No | No | No | No | No | No | No
Table 2. Creating models by using reverse engineering

Object | Universal Database | zSeries | iSeries | Derby | Oracle | SQL Server | Sybase | Informix
Schema | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Table | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
View | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Alias | Yes | Yes | Yes | No | Yes | No | No | No
Materialized query table | Yes | Yes | No | No | Yes | No | No | No
Nickname | Yes | N/A** | N/A** | N/A** | N/A** | N/A** | N/A** | N/A**
User defined type - distinct | Yes | Yes | Yes | No | No | No | Yes | Yes
User defined type - structured | Yes | No | No | No | Yes | No | No | No
Sequence | Yes | Yes | Yes | No | Yes | No | No | Yes
Procedure | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
User defined function | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Method | No | No | No | No | No | No | No | No
RoutineResultTable | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes
RoutineResultTable parameter | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No
Dependency constraint | Yes | Yes | Yes | No | No | No | No | Yes
Dependency trigger | Yes | No | Yes | No | Yes | Yes | Yes | No
Dependency routine | Yes | Yes | Yes | No | Yes | Yes | Yes | No
Dependency view | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes
Dependency materialized query table | Yes | Yes | No | No | Yes | No | No | No
Dependency sequence | No | Yes | No | No | Yes | No | No | No
Storage partitioning group | Yes | N/A | N/A | N/A | N/A | N/A | N/A | N/A
Storage group | N/A | Yes | N/A | N/A | N/A | N/A | N/A | N/A
Storage partition | Yes | Yes | N/A | N/A | N/A | N/A | N/A | N/A
Storage table space | Yes | Yes | No | N/A | Yes | No | No | No
Storage table space container/volume | Yes | Yes | No | N/A | Yes | No | No | No
Storage table space relationship with table | Yes | Yes | No | N/A | Yes | No | No | No
Storage table space relationship with materialized query table | Yes | Yes | No | N/A | Yes (materialized view) | No | No | No
Storage buffer pool | Yes | Yes | N/A | N/A | N/A | N/A | N/A | N/A
Storage partitioning key | Yes | Yes | No | N/A | Yes | No | No | No
Refresh | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Filter | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
** You can use WebSphere Information Integrator to reverse engineer metadata
from all of the relational data sources that WebSphere Information Integrator
supports.
3.0 Notices and trademarks
3.1 Notices
This information was developed for products
and services offered in the U.S.A. IBM may not offer the products, services,
or features discussed in this document in other countries. Consult your local
IBM representative for information on the products and services currently
available in your area. Any reference to an IBM product, program, or service
is not intended to state or imply that only that IBM product, program, or
service may be used. Any functionally equivalent product, program, or service
that does not infringe any IBM intellectual property right may be used instead.
However, it is the user's responsibility to evaluate and verify the operation
of any non-IBM product, program, or service.
IBM may have patents or
pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents.
You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
500 Columbus Avenue
Thornwood, NY 10594
U.S.A.
The
following paragraph does not apply to the United Kingdom or any other country
where such provisions are inconsistent with local law:
INTERNATIONAL
BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. Some states do not allow disclaimer of express or implied warranties
in certain transactions; therefore, this statement may not apply to you.
This
information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will
be incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this publication
at any time without notice.
Any references in this publication to non-IBM
Web sites are provided for convenience only and do not in any manner serve
as an endorsement of those Web sites. The materials at those Web sites are
not part of the materials for this IBM product, and use of those Web sites
is at your own risk.
Licensees of this program who wish to have information
about it for the purpose of enabling: (i) the exchange of information between
independently created programs and other programs (including this one) and
(ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.
Such information may be available, subject to appropriate
terms and conditions, including in some cases, payment of a fee.
The
licensed program described in this information and all licensed material available
for it are provided by IBM under terms of the IBM Customer Agreement, IBM
International Program License Agreement, or any equivalent agreement between
us.
Information concerning non-IBM products was obtained from the suppliers
of those products, their published announcements or other publicly available
sources. IBM has not tested those products and cannot confirm the accuracy
of performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
All statements regarding IBM's future
direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples
of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals,
companies, brands, and products. All of these names are fictitious and any
similarity to the names and addresses used by an actual business enterprise
is entirely coincidental.
3.2 Trademarks and service marks
IBM,
Cloudscape, Rational, DB2 Universal Database, and zSeries are trademarks or
registered trademarks of International Business Machines Corporation in the United States, other countries,
or both.
Java and all Java-based trademarks are trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
Linux is a trademark
of Linus Torvalds in the United States, other countries, or both.
Microsoft,
Windows NT, Windows 2000, and Windows XP are trademarks of Microsoft Corporation
in the United States, other countries, or both.
Other company, product,
or service names may be trademarks or service marks of others.
US Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.