You might encounter problems while migrating from an older version of WebSphere® Application Server.
This article is about configuration migration, such as migrating deployment managers and federated nodes in a network deployment environment. The Application Migration Toolkit for WebSphere Application Server provides support for migrating applications from previous versions of WebSphere Application Server to the latest product version. For information about migrating applications, read more about the Application Migration Toolkit.
This indicates that a configuration error was detected before the migration process began. The cause is either incorrect data entered when you created the migration jobs or a configuration problem. Review the log output for the error that was detected, correct it, and rerun the job. The logs are located in temporary_directory_location/nnnnn, where temporary_directory_location is the value that you specified when you created the migration jobs (the default is /tmp/migrate) and nnnnn is a unique number that is generated and displayed during the creation of your migration jobs; the number is also displayed in the JESOUT DDNAME of the WROUT and WRERR steps of your batch job stream.
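For example, assuming the default temporary directory and a hypothetical job number of 12345, you could list the generated logs from a z/OS UNIX shell:

ls -al /tmp/migrate/12345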
If the migration job fails after the Verify step, you can rerun it; but first, you must delete the WebSphere Application Server for z/OS® configuration home directory that was created in the CRHOME step. This directory corresponds to the home directory that you entered when you created the migration jobs, and it can also be found in the migration Job Control Language (JCL) environment variable V6_HomeDir. Because the migration procedure creates a new configuration file system for each node being migrated, deleting the configuration and starting from scratch is a simple process.
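As a minimal sketch, assuming a hypothetical configuration home of /WebSphere/V85config (check the V6_HomeDir variable in your migration JCL for the actual path), you could remove the directory from a z/OS UNIX shell before rerunning the job:

rm -rf /WebSphere/V85config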
A federated node is the most complex node to migrate because it is essentially two migrations rolled into one. A federated node requires a migration of the node configuration information contained in the deployment manager's primary repository as well as the configuration information contained in the federated node. Federated node migration requires an active connection to the deployment manager. If you have security enabled, it is essential that you follow the instructions that were generated when you created the migration jobs. The migration job must be submitted with a WebSphere Administrator's user ID that has been properly configured for obtaining secure connections.
If you select the option for the migration process to install the enterprise applications that exist in the Version 6.1 or above configuration into the new Version 8.5 configuration, you might encounter error messages during the application-installation phase of migration.
The applications that exist in the Version 6.1 or above configuration might have incorrect deployment information—typically, invalid XML documents that were not validated sufficiently in previous WebSphere Application Server runtimes. The runtime now has an improved application-installation validation process and will fail to install these malformed EAR files. This results in a failure during the application-installation phase of WASPostUpgrade and produces an "E" error message. This is considered a "fatal" migration error.
After you correct the problem, rerun the failed job by adding the RESTART=FINISHUP parameter to the job card and resubmitting the job.
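For illustration, a job card with the restart parameter added might look like the following; the job name, accounting information, and classes are placeholders for your installation's values:

//BBOWMG3B JOB (ACCT#),'WAS MIGRATION',CLASS=A,MSGCLASS=H,
//         RESTART=FINISHUP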
The migration logs are located in temporary_directory_location/nnnnn, where temporary_directory_location is the value that you specified when you created the migration jobs (the default is /tmp/migrate) and nnnnn is a unique number that was generated during the creation of your migration jobs. Normally, the space requirements for the migration logs are small. If you enable tracing, however, the log files can be quite large. The best practice is to enable tracing only after problems have been found. If tracing is required, enable only the tracing related to the step that is being debugged; this helps reduce the space requirements.
You can enable tracing when you create the migration jobs with the z/OS Migration Management Tool or the zmmt command. To enable tracing with the zmmt command, set the following properties in the response file:
Set zmbEnablePreUpgradeTrace and zmbEnablePostUpgradeTrace to a value from 0 (no tracing) to 4 (all tracing). Set zmbEnableProfileTrace and zmbEnableScriptingTrace to 0 (tracing disabled) or 1 (tracing enabled).
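For example, a response-file fragment (shown in isolation; your generated response file contains other required properties) that requests full WASPreUpgrade and WASPostUpgrade tracing but no profile or scripting tracing might look like this:

zmbEnablePreUpgradeTrace=4
zmbEnablePostUpgradeTrace=4
zmbEnableProfileTrace=0
zmbEnableScriptingTrace=0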
During migration, a backup copy of your Version 6.1 or above configuration is made. This backup becomes the source of the information being migrated. The default backup location is /tmp/migrate/nnnnn. This location can be changed when you create the migration jobs. Depending on the size of the node being migrated, this backup can be quite large. If your temporary space is inadequate, then you will need to relocate this backup.
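To gauge whether the backup fits in your temporary space, you can compare its size with the free space in the file system; for example, from a z/OS UNIX shell with a hypothetical job number of 12345:

du -sk /tmp/migrate/12345
df -k /tmp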
The migration jobs invoke the WASPreUpgrade and WASPostUpgrade commands through bbomigrt2.sh. The following example job step exports _BPX_SHAREAS and increased JVM heap settings before invoking bbomigrt2.sh:

BPXBATCH SH +
export _BPX_SHAREAS=NO; +
export IBM_JAVA_OPTIONS="-Xms256M -Xmx768M"; +
/wit/bigtmp/bbomigrt2.sh WASPreUpgrade +
/wit/bigtmp/24173105/_ +
1>> /wit/bigtmp/24173105/BBOWMG3D.out +
2>> /wit/bigtmp/24173105/BBOWMG3D.err;
If you are migrating from a system where you can edit the (normally read-only) driver file system, edit the WASPreUpgrade.sh and WASPostUpgrade.sh scripts in the bin directory to set the same JVM heap options:
set PERFJAVAOPTION=-Xms256M -Xmx768M
You can now continue your migration. If you chose to run the three individual jobs, run the BBOWMPRE job and, after it completes successfully (RC=0), run the BBOWMPOS job. If you edited the read-only file-system copies of the migration scripts, you can run the appropriate BBOWMG3* job.
Each z/OS installation is different with respect to job classes and time limitations. Make sure you have specified appropriate job classes and timeout values on your job card.
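For illustration, a job card that assigns a long-running class and removes the CPU time limit might look like this (the job name, accounting data, and classes are installation-specific placeholders):

//BBOWMG3B JOB (ACCT#),'WAS MIGRATION',CLASS=A,MSGCLASS=H,TIME=NOLIMIT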
A failure to initialize the Java virtual machine during the application-installation phase produces messages similar to the following:

MIGR0339I: Application WISO_wisoadmin_war.ear is deploying using the wsadmin command.
MIGR0241I: Output of wsadmin.
Error: unable to allocate 268435456 bytes for GC in j9vmem_reserve_memory.
JVMJ9VM015W Initialization error for library j9gc23(2): Failed to instantiate heap. 256M requested
Could not create the Java virtual machine.
The problem is that the WASPostUpgrade script launched from bbomigrt2.sh does not have enough remaining address space to initialize the Java Virtual Machine (JVM). Typically, this indicates that the spawned process is running in the same address space as the WASPostUpgrade JVM.
You can use the environment variable _BPX_SHAREAS to tell the underlying process whether spawned processes should share the same address space as the parent process. The default value (null) is equivalent to NO, but administrators can change it to YES or MUST to gain a performance benefit, because the address space does not need to be copied during fork or spawn actions.
To avoid the memory failure, set the variable to NO in the profile before running the migration job:

export _BPX_SHAREAS=NO
After the migration job completes, you can update the profile to reset _BPX_SHAREAS to its original value.
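For example, if your installation's original setting was YES:

export _BPX_SHAREAS=YES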
Review the instructions that were generated when you created the migration jobs. Verify that the JCL procedures have been copied over correctly to your PROCLIB, the RACF® definitions have been created, and the Version 8.5 libraries have been authorized. Make sure that the daemon process associated with your cell is at the appropriate level: it must be at the highest WebSphere Application Server for z/OS version level of all servers that it manages within the cell. If the daemon process is down-level, you might see exceptions similar to the following:
Exception = java.lang.ClassNotFoundException
Source = com.ibm.ws.cluster.selection.SelectionAdvisor.<init>
probeid = 133
Stack Dump = java.lang.ClassNotFoundException: rule.local.server
at java.net.URLClassLoader.findClass(URLClassLoader.java(Compiled Code))
at com.ibm.ws.bootstrap.ExtClassLoader.findClass(ExtClassLoader.java:106)
at java.lang.ClassLoader.loadClass(ClassLoader.java(Compiled Code))
at java.lang.ClassLoader.loadClass(ClassLoader.java(Compiled Code))
at java.lang.Class.forName1(Native Method)
at java.lang.Class.forName(Class.java(Compiled Code))
at com.ibm.ws.cluster.selection.rule.RuleEtiquette.runRules(RuleEtiquette.java:154)
at com.ibm.ws.cluster.selection.SelectionAdvisor.handleNotification(SelectionAdvisor.java:153)
at com.ibm.websphere.cluster.topography.DescriptionFactory$Notifier.run(DescriptionFactory.java:257)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1462)
Exception = java.io.IOException
Source = com.ibm.ws.cluster.topography.DescriptionManagerA.update
probeid = 362
Stack Dump = java.io.IOException
at com.ibm.ws.cluster.topography.ClusterDescriptionImpl.importFromStream(ClusterDescriptionImpl.java:916)
at com.ibm.ws.cluster.topography.DescriptionManagerA.update(DescriptionManagerA.java:360)
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java(Compiled Code))
at java.io.DataInputStream.readUTF(DataInputStream.java(Compiled Code))
at com.ibm.ws.cluster.topography.KeyRepositoryImpl.importFromStream(KeyRepositoryImpl.java:193)
You might also see the following error:

TCP Channel initialization failed. The socket bind failed for host and port 5060.

To resolve this problem, you can delete the transport chain UDP_SIP_PROXY_CHAIN in the serverindex.xml file at the node level of the server where the error occurred.

After migration, carefully review the job output and log files for errors.
If you migrate a node to Version 8.5 then discover that you need to revert to Version 6.1 or above, read Rolling back environments.
For current information from IBM® Support about known problems and their resolutions, and for documents that can save you time gathering the information needed to resolve a problem, read the IBM Support page before opening a PMR.
New ports that are registered on a migrated Version 8.5 node agent include WC_defaulthost, WC_defaulthost_secure, WC_adminhost, WC_adminhost_secure, SIB_ENDPOINT_ADDRESS, SIB_ENDPOINT_SECURE_ADDRESS, SIB_MQ_ENDPOINT_ADDRESS, and SIB_MQ_ENDPOINT_SECURE_ADDRESS. These ports are not needed by the node agent and can be safely deleted.
If you did not find your problem listed, contact IBM support.