
Migrate cells using the command-line tools

Note: For a GUI interface to the CLI tools, use the Configuration Migration Tool.

Review the migration planning information at Knowledge Collection: Migration planning for WAS.

Tip: Rather than specifying individual parameters on migration commands, we can specify the -properties file_name.properties parameter to supply the parameters from a properties file.
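As a sketch of that tip, the parameters might be collected in a simple key=value file. The key names below are assumptions for illustration only, not documented parameter names; check them against the parameter list of each migration command before use.

```shell
# Hypothetical sketch: collect migration parameters in a properties file
# instead of passing each one on the command line. The key names are
# assumptions; verify them against the documented command parameters.
cat > /tmp/dmgrMigration.properties <<'EOF'
username=myuser
password=mypass
oldProfile=v70dmgr01
profileName=v90dmgr01
EOF
# The file would then be referenced on the command line (not run here):
#   WASPostUpgrade.sh /mybackup/v70toV90dmgr01 -properties /tmp/dmgrMigration.properties
grep -c = /tmp/dmgrMigration.properties   # prints 4 (one key=value pair per line)
```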

This scenario covers migrating cells on the same host. If we intend to migrate cells to a different host, see Migrate cells to new host machines using the command-line tool.

Use the command-line tools to migrate a cell from a previous version of WAS to v9.0. The cell configuration consists of a deployment manager with one or more nodes, a web server, and an application client. All ports are migrated forward into the new configuration. This procedure assumes that the previous configuration is running.

Ensure that your setting for the maximum number of open files is 10000 or greater. A setting that is too low can cause various migration failures.
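On Linux and UNIX systems, the current per-process limit can be checked from the shell before starting the migration:

```shell
# Show the per-process soft limit on open file descriptors; the guidance
# above calls for 10000 or greater.
ulimit -n
# A permanent increase is usually configured by the system administrator,
# for example in /etc/security/limits.conf on Linux.
```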

For transitioning users: Some products that previously required separate migration tools are now migrated as part of the standard migration procedures.

For more information about these changes, see What is new for migration.


Tasks

  1. Back up the deployment manager and all old nodes.

    Use the backupConfig command to save the current deployment manager and node configurations to files that we can use for recovery if the migration fails. See backupConfig command.

    1. Change to the deployment_manager_profile_root/bin directory.
    2. Run the backupConfig command with the appropriate parameters and save the current profile configuration to a file. For example:

        /opt/WebSphereV70/profiles/v70dmgr01/bin/backupConfig.sh /mybackupdir/v70dmgr01backupBeforeV90migration.zip -username myuser -password mypass -nostop

    3. For each node in the configuration, change to the node_profile_root/bin directory.

    4. Run the backupConfig command with the appropriate parameters, and save the current profile configuration to a file. For example:

        /opt/WebSphereV70/profiles/v70node01/bin/backupConfig.sh /mybackupdir/v70node01backupBeforeV90migration.zip -username myuser -password mypass -nostop

  2. Install WAS v9.0 onto each target machine in a new directory.

    See installation documentation.

  3. Create the target deployment manager profile by running the manageprofiles command with the appropriate parameters.

    The target deployment manager profile is a new deployment manager profile that will be the target of the migration.

    The v9.0 profile nodeName and cellName must match the previous v7.0 or later nodeName and cellName. If the v9.0 deployment manager cellName or nodeName is different, the migration fails. For example:

      /opt/WebSphereV90/bin/manageprofiles.sh -create -profileName v90dmgr01 -templatePath /opt/WebSphereV90/profileTemplates/management -serverType DEPLOYMENT_MANAGER -nodeName currentDmgrNodeName -cellName currentCellName -hostName mydmgrhost.company.com

  4. Save the current deployment manager configuration to the migration backup directory by running the WASPreUpgrade command from the new deployment manager profile bin directory.

    The WASPreUpgrade command does not change the v7.0 or later configuration. See WASPreUpgrade command.

    If we are migrating from v8.0 or later to v9.0 and the profile is a deployment manager, the v8.0 profile is stopped when the WASPreUpgrade command runs. The deployment manager is restarted before WASPreUpgrade completes only if we specify -keepDmgrEnabled true on the command line or the corresponding option in the migration wizard.

    1. Run the WASPreUpgrade command, specifying the migration backup directory, the v7.0 or later installation root directory, and the deployment manager profile name. For example:

        /opt/WebSphereV90/bin/WASPreUpgrade.sh /mybackup/v70toV90dmgr01 /opt/WebSphereV70 -oldProfile v70dmgr01

    2. Review warnings or errors in the console output and WASPreUpgrade logs.

      After the WASPreUpgrade command completes, check the console output for Failed with errors or Completed with warnings messages. Then, check the WASPreUpgrade.old_profile.timestamp.log and WASPreUpgrade.trace log files for any warnings or errors.

      If there are errors, fix the errors and run the WASPreUpgrade command again. Check whether the warnings affect any other migration or runtime activities on v9.0.

      If the command completed successfully, it is not necessary to check the logs for errors or warnings.
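      The log check above can be scripted. The following self-contained sketch uses a sample log file and directory in place of the real WASPreUpgrade output:

```shell
# Self-contained sketch: scan migration logs for the marker messages named
# in the text. A sample log line stands in for real WASPreUpgrade output.
logdir=/tmp/migration_demo_logs
mkdir -p "$logdir"
echo "Completed with warnings" > "$logdir/WASPreUpgrade.v70dmgr01.20240101.log"
# List every log file that reports a failure or warning summary
grep -rlE "Failed with errors|Completed with warnings" "$logdir"
```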

  5. Restore the previous deployment manager configuration that you saved in the migration backup directory by running the WASPostUpgrade command.

    If we use the options that are shown in the following example, all ports are carried forward, the old deployment manager is shut down and disabled, and all applications are installed.

    See WASPostUpgrade command.

    1. Run the WASPostUpgrade command. For example:

        /opt/WebSphereV90/bin/WASPostUpgrade.sh /mybackup/v70toV90dmgr01 -oldProfile v70dmgr01 -profileName v90dmgr01 -resolvePortConflicts incrementCurrent -backupConfig TRUE -includeApps TRUE -keepDmgrEnabled FALSE -username myuser -password mypass

      When we create profiles, only one profile is considered the default profile per installation.

      We can identify the default profiles by looking in the profileRegistry.xml file in the WAS_HOME/properties directory. The source profileRegistry.xml is copied to the migration backup directory as part of the WASPreUpgrade command.

      To continue to use the old profile after it is migrated, specify the -clone TRUE parameter. If we specify a clone migration for the deployment manager, we must also clone all of its federated nodes. Specifying a clone migration automatically sets -keepDmgrEnabled to true.

      Always specify the -oldProfile and -profileName parameters when running the WASPostUpgrade command.
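      As a self-contained sketch, the default profile can be located with a search of the registry file. The sample registry below, including its isDefault attribute, is an assumption about the file's format made for illustration:

```shell
# Sketch: locate the default profile in profileRegistry.xml. A sample file
# stands in for WAS_HOME/properties/profileRegistry.xml; the isDefault
# attribute is an assumption about the registry format.
cat > /tmp/profileRegistry.xml <<'EOF'
<profiles>
    <profile isDefault="true" name="v90dmgr01" path="/opt/WebSphereV90/profiles/v90dmgr01"/>
</profiles>
EOF
# Print the entry that is marked as the default profile
grep 'isDefault="true"' /tmp/profileRegistry.xml
```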

    2. Review warnings or errors in the console output and WASPostUpgrade logs.

      Check the WASPostUpgrade console output for the messages Failed with errors or Completed with warnings.

      Look in the following logs for warnings or errors:

      • migration_backup_dir/logs/WASPostUpgrade.target_profile.timestamp.log
      • migration_backup_dir/logs/WASPostUpgrade.target_profile.trace

      If there are errors, fix the errors and run the WASPostUpgrade command again. Check whether the warnings affect any other migration or runtime activities on v9.0.

      If the configuration was migrated correctly but any applications were not installed, we can run the WASMigrationAppInstaller command to install only the applications that were not migrated. See WASMigrationAppInstaller command.

      If the command completed successfully, it is not necessary to check the logs for errors or warnings.

  6. Back up the v9.0 deployment manager configuration to a file by running the backupConfig command on the v9.0 deployment manager.

    This is an important step in the cell migration plan. If there are any node migration failures, we can restore the cell configuration to the point before the failure, apply remedial actions, and attempt the node migration again.

    1. Change to the deployment_manager_profile_root/bin directory.

    2. Run the backupConfig command with the appropriate parameters. For example:

        /opt/WebSphereV90/profiles/v90dmgr01/bin/backupConfig.sh /mybackupdir/v70toV90dmgr01backupMigratedDmgrOnly.zip -username myuser -password mypass

  7. Start the v9.0 deployment manager.

    Ensure that the previous version of the deployment manager is not running.

    1. Change to the new v9.0 deployment manager profile bin directory.

    2. Run the startManager command.
    3. While the deployment manager is running, check the SystemOut.log file for warnings or errors.

      IBM recommends using the High Performance Extensible Logging (HPEL) log and trace infrastructure. We can view HPEL log and trace information using the logViewer command.

    4. Check the node agent and application server logs on all of the nodes for new warnings or errors. If automatic synchronization is enabled, allow each node to synchronize, allow the applications to restart, and then check the logs again for new warnings or errors.

  8. For Compute Grid or Feature Pack for Modern Batch, verify that the job scheduler was migrated correctly and that we can dispatch jobs to the previous version servers that host your batch applications.

    To verify the job scheduler migration, after the deployment manager restarts, access the job management console through a web browser.

    To verify that the previous version servers that host your batch applications work correctly:

    1. Verify that the batch applications on the migrated server or cluster are started. Examine the server or cluster logs for any errors.

    2. Verify that we can dispatch batch jobs to the migrated server by submitting a job from the migrated job scheduler server. We can submit the job using the Job Management Console, the WSGrid utility, the EJB interface, or the web services interface.

  9. Migrate application client installations.

    Migrate client resources to v9.0-level resources.

    1. Install the WebSphere v9.0 application client.

      See installation documentation.

    2. Run the v9.0 WASPreUpgrade command to save the application client security settings to a migration backup directory. For example:

        /opt/AppClientV90/bin/WASPreUpgrade.sh /mybackup/v70clientToV90 /opt/AppClientV70

    3. Run the v9.0 WASPostUpgrade command to restore the application client security settings to the new v9.0 client.

  10. Migrate nodes.

    Use the migration tools to migrate the previous versions of the nodes in the configuration to v9.0. Perform the following procedure for each node that we plan to migrate to v9.0.

    We must use the same source node name, but a different temporary cell name, for each node that we migrate to v9.0.
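    The naming constraint can be sketched as follows; the node names are the illustrative placeholders used elsewhere in this scenario:

```shell
# Sketch: each migrated node keeps its source node name but gets its own
# temporary cell name. The node names here are illustrative placeholders.
for node in currentNode1Name currentNode2Name; do
    echo "nodeName=$node tempCellName=tempCell_$node"
done
```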

    1. Ensure that the v9.0 deployment manager is running.

    2. Create the target node profile. Run the manageprofiles command with the appropriate parameters to create a new managed profile. For example:

        /opt/WebSphereV90/bin/manageprofiles.sh -create -profileName node1 -templatePath /opt/WebSphereV90/profileTemplates/managed -nodeName currentNode1Name -cellName tempCellName -hostName mynode1host.company.com

    3. Run the WASPreUpgrade command to save the current node configuration information to a migration backup directory. Specify a new directory for the backup files, the installation root directory, and the node profile. For example:

        /opt/WebSphereV90/bin/WASPreUpgrade.sh /mybackup/v70toV90node1 /opt/WebSphereV70 -oldProfile 70node1

    4. Review warnings or errors in the console output and WASPreUpgrade logs.

      Check the WASPreUpgrade console output for the following messages: Failed with errors or Completed with warnings.

      Look in the following logs for warnings or errors:

      • migration_backup_dir/logs/WASPreUpgrade.old_profile.timestamp.log
      • migration_backup_dir/logs/WASPreUpgrade.trace

      If the WASPreUpgrade command completed with Success, then checking the logs for errors or warnings is not necessary.

    5. Stop the node agent. If we have v7.0 or later nodes running during a migration to v9.0, we must stop the node agent on the node that is being migrated. If we do not stop the node agent, we might encounter corruption problems.

    6. Run the WASPostUpgrade command to restore the saved node configuration into the new v9.0 managed profile. For example:

        /opt/WebSphereV90/bin/WASPostUpgrade.sh /mybackup/v70toV90node1 -profileName currentNode1Name -oldProfile 70node1 -resolvePortConflicts incrementCurrent -backupConfig TRUE -username myuser -password mypass

      If we cloned the deployment manager, we must also clone all federated nodes. Specify the -clone TRUE parameter and the new deployment manager host name and SOAP or RMI port. Do not clone federated nodes unless the deployment manager was also cloned.

        /opt/WebSphereV90/bin/WASPostUpgrade.sh /mybackup/v70toV90node1 -profileName currentNode1Name -oldProfile 70node1 -resolvePortConflicts incrementCurrent -backupConfig TRUE -username myuser -password mypass -clone TRUE -newDmgrHostName myV90DmgrHost.mycompany.com -newDmgrSoapPort 8879

    7. Review warnings or errors in the console output and WASPostUpgrade logs.

      Check the WASPostUpgrade console output for the messages Failed with errors or Completed with warnings.

      Look in the following logs for errors or warnings:

      • migration_backup_dir/logs/WASPostUpgrade.target_profile.timestamp.log
      • migration_backup_dir/logs/WASPostUpgrade.target_profile.trace

      If the WASPostUpgrade command fails, we might have to restore the v9.0 deployment manager from the backupConfig file. If the WASPostUpgrade processing ran the syncNode command, the deployment manager is aware that the node was migrated. The node cannot be migrated again until the deployment manager is restored to the state before the node migration.

      If the configuration was migrated correctly but any applications were not installed, we can run the WASMigrationAppInstaller command to install only the applications that were not migrated. See WASMigrationAppInstaller command.

      If the command completed with Success, then checking the logs for errors or warnings is not necessary.

    8. Check the v9.0 deployment manager SystemOut.log file for warnings or errors.

      IBM recommends using the High Performance Extensible Logging (HPEL) log and trace infrastructure. We can view HPEL log and trace information using the logViewer command.

    9. Start the migrated v9.0 node agent.
    10. Check the v9.0 deployment manager and node SystemOut.log files for warnings or errors.
    11. Synchronize the cell.

    12. Stop all the application servers on the v9.0 migrated node.

    13. Start the appropriate application servers on the v9.0 migrated node.

    14. For Compute Grid or Feature Pack for Modern Batch, verify that the job scheduler was migrated correctly and that we can dispatch jobs to the migrated servers that host your batch applications.

      To verify the job scheduler migration, after the migrated application servers or clusters restart, access the job management console through a web browser.

      To verify that the v9.0 servers that host your batch applications work correctly:

      1. Verify that the batch applications on the migrated server or cluster are started. Examine the server or cluster logs for any errors.

      2. Verify that we can dispatch batch jobs to the migrated server by submitting a job from the migrated job scheduler server. We can submit the job using the Job Management Console, the WSGrid utility, the EJB interface, or the web services interface.

    15. Save the v9.0 profile configuration to a file by running the backupConfig command with the appropriate parameters. For example:

        /opt/WebSphereV90/profiles/v70toV90node1/bin/backupConfig.sh /mybackupdir/v70toV90node1.zip -username myuser -password mypass -nostop

      Each time that we run the backupConfig command, use a new backup file name.
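      One way to keep each run's backup distinct, as a sketch, is to embed a timestamp in the file name:

```shell
# Sketch: embed a timestamp in the backup file name so repeated
# backupConfig runs never overwrite an earlier checkpoint.
stamp=$(date +%Y%m%d-%H%M%S)
backup="/mybackupdir/v70toV90node1-$stamp.zip"
echo "$backup"
```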

    16. Save the deployment manager configuration to a file by running the backupConfig command with the appropriate parameters. Before we run the command, change to the deployment_manager_profile_root/bin directory on the v9.0 deployment manager host.

      For each node migrated, back up the v9.0 deployment manager configuration to a new backup file. For example:

        /opt/WebSphereV90/profiles/v90dmgr01/bin/backupConfig.sh /mybackupdir/v70toV90dmgr01backupMigratedDmgrPlusNodeX.zip -username myuser -password mypass

      If we are migrating a node to a different host, refer to the information about migrating nodes in Migrate cells to new host machines using the command-line tool.

  11. Migrate plug-ins for web servers.

    The product supports several different web servers, as described in the system requirements. For installation information, see the documentation for your web server type and version.

    1. Ensure that the v9.0 deployment manager is running.
    2. Update the version of the web server plug-in used in the cell.

    3. For all application servers in the cell to be served by the web server, create a new web server definition for the web server plug-in. For more information about creating web server definitions, see Implementing a web server plug-in.

We migrated from a previous version to WAS v9.0 using the migration tools.


  • Migrate cells to new host machines using the command-line tool
  • Migration Toolkit on WASdev

  • WASPreUpgrade command
  • WASPostUpgrade command
  • WASMigrationAppInstaller command
  • manageprofiles command
  • backupConfig command
  • restoreConfig command