
Migrate cells to new host machines using the command-line tool

This task describes how to migrate each profile in a cell configuration from a previous version of WAS to WAS v9.0 hosted on a different machine. The cell configuration consists of a deployment manager with one or more nodes, a web server, and an application client. All ports are migrated forward into the new configuration. Before starting, review the WebSphere Migration Knowledge Collection.

Note that this procedure does NOT migrate WAS from one operating system, such as AIX, to another, such as Linux. It works only when the source and target operating systems match.


Tasks

  1. On all hosts, set the maximum number of open files to 10000 or greater. If the limit is too low, migration can fail in a variety of ways.
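On Linux or UNIX, the limit can be checked and raised for the current shell as sketched below. The 10000 threshold comes from this step; making the change permanent is OS-specific (for example, via /etc/security/limits.conf on many Linux distributions).

```shell
# Check the per-process open-file limit; migration can fail if it is below 10000.
current_limit=$(ulimit -n)
echo "Current open-file limit: $current_limit"

# Raise the soft limit for this shell session if it is too low.
# (If this fails, the hard limit must first be raised as root.)
if [ "$current_limit" != "unlimited" ] && [ "$current_limit" -lt 10000 ]; then
    ulimit -S -n 10000 2>/dev/null || echo "Raise the hard limit as root first."
fi
```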

  2. On the source hosts, back up the deployment manager and all nodes.

      cd dmgr_profile_root/bin
      backupConfig.sh /path/to/old_dmgr.zip -username wasadmin -password mypass -nostop
      cd app_server_root/profiles/v70node01/bin
      backupConfig.sh /path/to/old_appserver.zip -username wasadmin -password mypass -nostop

  3. Install WAS ND v9.0 onto each target host in a new directory.

    Install on a system whose operating system and architecture match those of the source profile. After the remote migration .jar file is generated, it works on any system with the same operating system and architecture; the archive contains operating-system-specific code that runs only on that platform.

  4. From the target host, create the remote migration .jar file.

    This file contains the files necessary to run the WASPreUpgrade command on a system which does not have WAS v9.0 installed.

    1. Create the remote migration .jar.

        cd $WAS_HOME/bin/migration/bin
        createRemoteMigrJar.sh -targetDir <dir_for_the_remote_migration_jar>

      On Windows, use createRemoteMigrJar.bat instead.

      This creates the following file:

        WAS_V90_OS.arch_RemoteMigrSupport.jar

      For example:

        WAS_V90_windows.amd64_RemoteMigrSupport.jar

    2. Send the .jar file to the system where your source profile resides.

    3. Extract the file to a temporary location.

    4. Change directories to the bin directory in the temporary location.

    We are now ready to run the WASPreUpgrade command against the source profile. However, do not issue the command until instructed to do so in a later step.

  5. On the target host, create the target deployment manager profile.

    The v9.0 cell and node names must match the cell and node names in the source configuration; otherwise, the migration fails.

      cd v9_install_root/bin/
      manageprofiles.sh -create \
                        -profileName v70toV90dmgr01 \
                        -templatePath /opt/WebSphereV90/profileTemplates/management \
                        -serverType DEPLOYMENT_MANAGER \
                        -nodeName currentDmgrNodeName \
                        -cellName currentCellName \
                        -hostName mydmgrhost.company.com
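Because the target names must match the source, one way to confirm the source cell and node names before running manageprofiles is to read them from the source profile's config tree. The sketch below fabricates a stand-in directory tree for illustration; on a real system, point SRC_PROFILE at the v7.0 deployment manager profile root.

```shell
# Hypothetical stand-in for the source profile's config tree.
SRC_PROFILE=/tmp/src_dmgr_profile
mkdir -p "$SRC_PROFILE/config/cells/currentCellName/nodes/currentDmgrNodeName"

# The directory names under config/cells and .../nodes are the cell and
# node names that the target profile must reuse.
CELL_NAME=$(ls "$SRC_PROFILE/config/cells")
NODE_NAME=$(ls "$SRC_PROFILE/config/cells/$CELL_NAME/nodes")
echo "Use -cellName $CELL_NAME -nodeName $NODE_NAME"
```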
      

  6. Save the current deployment manager configuration to the migration backup directory.

    The WASPreUpgrade command does not change the old configuration.

    1. Run the WASPreUpgrade command with the -machineChange true parameter to save the current deployment manager configuration to a migration backup directory. For example:

        cd /path/to/remote/migration.jar/migration/bin/
        ./WASPreUpgrade.sh /mybackup_old_host/v70toV90dmgr01 /opt/WebSphereV70 -oldProfile 70dmgr01 -machineChange true

      where mybackup_old_host is the directory to which the profile configuration files are copied in preparation for the migration to the new host.

      If we are migrating from v8.0 to v9.0 and the profile is a deployment manager, the v8.0 profile is stopped when WASPreUpgrade runs. The deployment manager is restarted before WASPreUpgrade completes only if we specify -keepDmgrEnabled true on the command line or select the corresponding option in the Migration wizard.

      If we specify -machineChange true, we must update the job manager URL for all resources (such as other deployment managers or application servers) that are managed by the job manager function of the v8.0 deployment manager after the migration.

    2. Review warnings or errors in the console output and WASPreUpgrade logs. After the WASPreUpgrade command is complete, check the console output for Failed with errors or Completed with warnings messages. Then, check the following log files for any warnings or errors:

      • mybackup_old_host/v70toV90dmgr01/logs/WASPreMigrationSummary.log
      • mybackup_old_host/v70toV90dmgr01/logs/WASPreUpgrade.timestamp.log
      • mybackup_old_host/v70toV90dmgr01/logs/WASPreUpgrade.trace

      If there are errors, fix the errors and run the WASPreUpgrade command again. Check whether the warnings affect any other migration or runtime activities on v9.0.

      If the command completed successfully, it is not necessary to check the logs for errors or warnings.
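The log check described above can be scripted with grep. This sketch fabricates a backup directory and a sample warning line so it is self-contained; on a real system, point BACKUP_DIR at the actual migration backup directory.

```shell
# Hypothetical backup directory with one fabricated log line for illustration.
BACKUP_DIR=/tmp/mybackup_old_host/v70toV90dmgr01
mkdir -p "$BACKUP_DIR/logs"
printf 'Completed with warnings.\n' > "$BACKUP_DIR/logs/WASPreUpgrade.trace"

# List any log file that reports errors or warnings.
grep -l -e 'Failed with errors' -e 'Completed with warnings' \
    "$BACKUP_DIR"/logs/* || echo "No errors or warnings found."
```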

  7. Archive the backup directory created by the WASPreUpgrade command.

    Do not use the Windows archive tool because it is not compatible with a WAS migration.

    1. Use the archive tool of our choice to create a compressed file of the backup directory. For example:

        cd /mybackup_old_host
        /opt/WebSphereV70/java/bin/jar -cf v70toV90dmgr01.jar v70toV90dmgr01/

    2. Move the archived file to the target machine.

    3. Create a directory on the target machine and extract the archived file to the new directory. For example:

        mkdir /mybackup_new_host
        cd /mybackup_new_host
        /opt/WebSphereV90/java/bin/jar -xf v70toV90dmgr01.jar

      where mybackup_new_host is the directory to which we are migrating the files.

  8. From the target host, restore the previous deployment manager configuration that we saved in the migration backup directory.

    If we use the options shown in the example, all ports are carried forward, and all applications are installed.

        cd v9_install_root/bin/
        WASPostUpgrade.sh /mybackup_new_host/v70toV90dmgr01 -profileName v70toV90dmgr01 -oldProfile 70dmgr01 -resolvePortConflicts incrementCurrent -backupConfig TRUE -keepDmgrEnabled TRUE -username wasadmin -password mypass

      where mybackup_new_host is the directory from which the source profile configuration files are migrated.

      To continue to use the old profile after it is migrated, specify the -clone TRUE parameter. If we specify a clone migration for the deployment manager, we must also clone all of its federated nodes.

    1. Review warnings or errors in the console output and WASPostUpgrade logs. After the WASPostUpgrade command is complete, check the console output for Failed with errors or Completed with warnings messages. Then, check the following log files for any warnings or errors:

      • mybackup_new_host/v70toV90dmgr01/logs/WASPostMigrationSummary.log
      • mybackup_new_host/v70toV90dmgr01/logs/WASPostUpgrade.target_profile_name.timestamp.log
      • mybackup_new_host/v70toV90dmgr01/logs/WASPostUpgrade.target_profile_name.trace

      If there are errors, fix the errors and run the WASPostUpgrade command again. Check whether the warnings affect any other migration or runtime activities on v9.0.

      If the configuration was migrated correctly but any applications were not installed, we can run the WASMigrationAppInstaller command to install only the applications that were not migrated. See WASMigrationAppInstaller command.

      If the command completed successfully, it is not necessary to check the logs for errors or warnings.

    After the WASPostUpgrade command completes successfully, do not start the new deployment manager. We must complete a few more steps before starting the new deployment manager.

  9. Save the v9.0 migrated deployment manager configuration to a file by running the backupConfig command on the v9.0 deployment manager.

    If we encounter a node migration failure, we can restore the cell configuration to the point before the failure, apply remedial actions, and then attempt the node migration again.

    1. Change to the deployment_manager_profile_root/bin directory.

    2. Run the backupConfig command with the appropriate parameters and save the v9.0 profile configuration to a file. For example:

        version_9_profile_root/profiles/v70toV90dmgr01/bin/backupConfig.sh /mybackup_new_host/v70toV90dmgr01backupMigratedDmgrOnly.zip -username wasadmin -password mypass

      where mybackup_new_host is the location where the configuration restore points are stored.

  10. Stop and disable the deployment manager on the old host.

    1. Stop the deployment manager on the old host.

    2. Disable the deployment manager on the old host. To disable this deployment manager, we must rename the associated serverindex.xml file as indicated in the following information:

        Old name

        $PROFILE_ROOT/config/cells/cell/nodes/deployment_manager_node/serverindex.xml

        New name

        $PROFILE_ROOT/config/cells/cell/nodes/deployment_manager_node/serverindex.xml_disabled
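The rename amounts to a single mv command. The sketch below builds a throwaway profile tree so it is self-contained; the cell and node names are placeholders, and on a real system PROFILE_ROOT is the old deployment manager's profile root.

```shell
# Throwaway stand-in for the old deployment manager profile tree.
PROFILE_ROOT=/tmp/old_dmgr_profile
NODE_DIR="$PROFILE_ROOT/config/cells/myCell/nodes/myDmgrNode"
mkdir -p "$NODE_DIR"
touch "$NODE_DIR/serverindex.xml"

# The rename itself -- this is the only action the step requires.
mv "$NODE_DIR/serverindex.xml" "$NODE_DIR/serverindex.xml_disabled"
```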

  11. Start the v9.0 deployment manager on the new host.

    1. Change to the new v9.0 deployment_manager_profile_root/bin directory.

    2. Run the startManager command.

    3. While the deployment manager is running, check the SystemOut.log file for warnings or errors.

      IBM recommends using the High Performance Extensible Logging (HPEL) log and trace infrastructure. Check the warnings to see whether they affect any node migration or runtime activities after the v9.0 deployment manager is started.

    4. Ensure that the v9.0 deployment manager starts successfully.

  12. Manually synchronize the old nodes to the new v9.0 deployment manager.

    Ensure that the v9.0 deployment manager on the new host is running. We must log into the machine containing the old nodes and run the syncNode command.

    1. Stop the node agent.

    2. Obtain the new deployment manager host name and port number, and update the following file:

        node_agent_profile_root/properties/wsadmin.properties

      Set the following properties:

        com.ibm.ws.scripting.host = newhost
        com.ibm.ws.scripting.port = newport

    3. Run the syncNode command:

        cd node_agent_install_root/bin
        syncNode.sh myV90DmgrHost.mycompany.com 8879 -username wasadmin -password mypass

    4. Start the node agent if synchronization is successful.
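The wsadmin.properties update can be done with sed. This sketch fabricates the properties file so it is self-contained; on a real system, edit node_agent_profile_root/properties/wsadmin.properties and substitute your new deployment manager host and port.

```shell
# Fabricated stand-in for node_agent_profile_root/properties/wsadmin.properties.
PROPS=/tmp/wsadmin.properties
cat > "$PROPS" <<'EOF'
com.ibm.ws.scripting.host=olddmgrhost.company.com
com.ibm.ws.scripting.port=8879
EOF

# Rewrite the host and port entries in place (a .bak copy is kept).
sed -i.bak \
    -e 's|^com.ibm.ws.scripting.host=.*|com.ibm.ws.scripting.host=myV90DmgrHost.mycompany.com|' \
    -e 's|^com.ibm.ws.scripting.port=.*|com.ibm.ws.scripting.port=8879|' \
    "$PROPS"
```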

  13. Migrate application client installations.

    If the source WebSphere Application Client is Version 7.0, we must also run the WASPreUpgrade and WASPostUpgrade commands to migrate the existing security settings.

    1. Identify all client hosts that we must migrate.

    2. Install the WebSphere v9.0 application client.

    3. Run the v9.0 WASPreUpgrade command to save the Application client security settings to a migration backup directory. For example:

        /opt/AppClientV90/bin/WASPreUpgrade.sh /mybackup_client/v70clientToV90 /opt/AppClientV70

    4. Run the v9.0 WASPostUpgrade command to restore the Application client security settings to the new v9.0 client. For example:

        /opt/AppClientV90/bin/WASPostUpgrade.sh /mybackup_client/v70clientToV90

  14. Migrate nodes.

    Important: These steps apply to cross-machine migrations only. If we are not completing a cross-machine migration of a node, see the information about migrating nodes in Migrate cells using the command-line tools. Ensure that the v9.0 deployment manager is running. For each node that we plan to migrate to v9.0, perform the following steps.

    For the migration to be successful, use the same source node name but a different temporary cell name for each node that we migrate to v9.0 or later.

    1. Install WAS v9.0 onto each target host. See documentation about installing an application-serving environment.

    2. Create the target node profile. Run the manageprofiles command with the appropriate parameters to create a new managed profile. For example:

        cd v9_install_root/bin
        ./manageprofiles.sh -create \
                -profileName node1 \
                -templatePath /opt/WebSphereV90/profileTemplates/managed \
                -nodeName currentNode1Name \
                -cellName tempCellName \
                -hostName mynode1host.company.com

    3. Use the remote migration .jar file that we created for migrating the deployment manager to make the WASPreUpgrade command available on the current node machine.

      This step needs to be done only if the source node and deployment manager are not on the same machine, and it can be done only if the machine architecture is the same. See step 4 of this scenario, in which the remote migration .jar file is created.

    4. Run the WASPreUpgrade command with the -machineChange true parameter to save the current node configuration to a migration backup directory. Choose a new directory for the backup files. For example:

        <path to remote migration jar>/migration/bin/WASPreUpgrade.sh /mybackup_old_host/v70toV90node1 /opt/WebSphereV70 -oldProfile 70node1 -machineChange true

    5. Check the WASPreUpgrade console output for error and warning messages. We might find the following messages: "Failed with errors" or "Completed with warnings". Also, look in the following log files for error or warning messages:

      • mybackup_old_host/v70toV90node1/logs/WASPreMigrationSummary.log
      • mybackup_old_host/v70toV90node1/logs/WASPreUpgrade.timestamp.log
      • mybackup_old_host/v70toV90node1/logs/WASPreUpgrade.trace

      If the WASPreUpgrade command is successful, we do not need to check the log files for error or warning messages.

    6. Use the archive tool of our choice to create a compressed file of the backup directory that was created by the WASPreUpgrade command. For example:

        cd /mybackup_old_host
        /opt/WebSphereV70/java/bin/jar -cf v70toV90node1.jar v70toV90node1/

    7. Move the archived file to the target machine.

    8. Create a directory on the target machine and extract the archived file to the new directory. For example:

        mkdir /mybackup_new_host
        cd /mybackup_new_host
        /opt/WebSphereV90/java/bin/jar -xf v70toV90node1.jar

      where mybackup_new_host is the directory from which the profile configuration files are migrated.

    9. Stop the application servers on the old node, then stop the node agent on the old node.

    10. Stop and disable the node on the old host. Ensure that we do not use the node on the old host. To disable the node, we must rename the associated serverindex.xml file as indicated in the following information:

      Old name

      $PROFILE_ROOT/config/cells/cell/nodes/node/serverindex.xml

      New name

      $PROFILE_ROOT/config/cells/cell/nodes/node/serverindex.xml_disabled

    11. Run the WASPostUpgrade command to restore the saved node configuration into the new v9.0 managed profile. For example:

        v9_install_root/bin/WASPostUpgrade.sh /mybackup_new_host/v70toV90node1 -profileName v70toV90node1 \
                -oldProfile 70node1 \
                -resolvePortConflicts incrementCurrent \
                -backupConfig TRUE \
                -includeApps TRUE \
                -username wasadmin \
                -password mypass

      If we cloned the deployment manager, we must also clone all federated nodes. Specify the -clone TRUE parameter and the new deployment manager host name and SOAP or RMI port. Do not clone federated nodes unless the deployment manager was cloned.

        v9_install_root/bin/WASPostUpgrade.sh /mybackup_new_host/v70toV90node1 -profileName v70toV90node1 \
                -oldProfile 70node1 \
                -resolvePortConflicts incrementCurrent \
                -backupConfig TRUE \
                -includeApps TRUE \
                -username wasadmin \
                -password mypass \
                -clone TRUE \
                -newDmgrHostName myV90DmgrHost.mycompany.com \
                -newDmgrSoapPort 8879

    12. Check the WASPostUpgrade console output for the following messages. We might find the following messages: "Failed with errors" or "Completed with warnings". Also, look in the following log files for errors or warning messages:

      • mybackup_new_host/v70toV90node1/logs/WASPostMigrationSummary.log
      • mybackup_new_host/v70toV90node1/logs/WASPostUpgrade.target_profile.timestamp.log
      • mybackup_new_host/v70toV90node1/logs/WASPostUpgrade.target_profile.trace

      If the WASPostUpgrade command fails, we might need to restore the v9.0 deployment manager from the backup configuration file. If the WASPostUpgrade command processing ran the syncNode command, then the deployment manager is aware that the node has been migrated. The node cannot be migrated again until the deployment manager has been restored to the state before the node migration.

      If the configuration was migrated correctly but any applications were not installed, we can run the WASMigrationAppInstaller command to install only the applications that were not migrated. See WASMigrationAppInstaller command.

    13. Check the v9.0 deployment manager SystemOut.log file for error or warning messages.

      IBM recommends using the High Performance Extensible Logging (HPEL) log and trace infrastructure. We view HPEL log and trace information by using the logViewer command.

    14. Start the migrated v9.0 node agent.
    15. Check the v9.0 deployment manager and node SystemOut.log for error or warning messages.

    16. Optional: Synchronize the cell if the auto-synchronization process is not enabled.

    17. Start the appropriate application servers on the migrated v9.0 node.

    18. Run the backupConfig command and save the v9.0 profile configuration to a file. For example:

        version_9_profile_root/v70toV90node1/bin/backupConfig.sh /mybackup_new_host/v70toV90node1.zip -username wasadmin -password mypass -nostop

      Each time we run the backupConfig command on a specific node, use a new backup file name.
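One simple way to satisfy the unique-name requirement is to embed a timestamp in the backup file name. The paths and node name below are illustrative, and the sketch only echoes the backupConfig invocation rather than running it.

```shell
# Build a unique, timestamped backup file name for this node.
STAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_FILE="/tmp/mybackup_new_host/v70toV90node1-$STAMP.zip"
mkdir -p /tmp/mybackup_new_host

# Echo the command instead of running it (backupConfig.sh is a WAS tool,
# not assumed to be installed here).
echo "Would run: backupConfig.sh $BACKUP_FILE -username wasadmin -password mypass -nostop"
```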

    19. Save the deployment manager configuration using the backupConfig command. On the v9.0 deployment manager host, change to the deployment_manager_profile_root/bin directory. Run the backupConfig command and save the v9.0 profile configuration to a file. For example:

        version_9_profile_root/v70toV90dmgr01/bin/backupConfig.sh /mybackup_new_host/v70toV90dmgr01backupMigratedDmgrPlusNodeX.zip -username wasadmin -password mypass

      For each node that we migrate, back up the v9.0 deployment manager configuration to a new backup file.

    20. Repeat the previous steps for additional nodes.

  15. Migrate plug-ins for web servers.

    1. Ensure that the v9.0 deployment manager is running.
    2. Update the version of the web server plug-in used in the cell.
    3. For all application servers in the cell that are to be served by the web server, create a new web server definition.

    WAS ND v9.0 supports several different web servers, as described in the system requirements. For installation information, see the documentation for your web server type and version.

We used the migration tools to migrate the cell configurations from a previous version of WAS to new host machines that run WAS v9.0.


  • Migrate cells using the command-line tools
  • WASPreUpgrade command
  • WASPostUpgrade command
  • WASMigrationAppInstaller command
  • manageprofiles command
  • Migrate web server configurations
  • backupConfig command
  • restoreConfig command
  • Migration Toolkit on WASdev