WebSphere Batch

Batch applications are designed to perform long and complex transaction processing and typically execute computationally intensive work. This type of processing requires more resources than traditional online transaction processing (OLTP). Batch applications run as background jobs described by a job control language and use a processing model based on submit, work, and result actions. Batch processes can take hours to execute, and the tasks are typically transactional, multi-step processes.

WAS V8.5 with WebSphere Batch supplies a unified batch architecture. Using XML job control language (xJCL), WebSphere Batch provides consistent programming and operational models. WebSphere Batch uses a batch technology optimized for Java and supports long-running applications, ensuring agility, scalability, and cost efficiency for enterprises.


Batch jobs

A batch job consists of a series of definitions that direct the execution of one or more batch applications and specify their input and output. A batch job performs a specific set of tasks in a predefined sequence to accomplish specific business functionality.

Batch job workloads are executed in a batch container in WAS environments. This batch container is the main engine responsible for the execution of batch applications. It runs batch jobs under the control of an asynchronous bean, which can be thought of as a container-managed thread. The batch container ultimately processes job definitions and carries out the lifecycle of jobs.

WebSphere runs batch applications that are written in Java and implement the WebSphere Batch programming model. They are packaged as EAR files and deployed to the batch container hosted in an application server or cluster. Batch applications are executed non-interactively in the background.

Batch applications implement one of two programming models:

Transactional batch: These applications handle large amounts of work based on repetitive tasks, such as processing a large number of records.
Compute-intensive: These applications perform work that requires large amounts of system resources, in particular CPU and memory. The application is responsible for providing all of the logic for performing the necessary work.

Batch jobs can perform a series of tasks that are a combination of transactional and compute-intensive tasks to complete the execution of a batch application.


Batch applications

Batch applications are programs designed to execute non-interactive tasks in the background. Input and output are generally accessed as logical constructs by the batch application and are mapped to concrete data resources by the batch job definition.

Batch applications are Java EE applications consisting of Plain Old Java Objects (POJOs).

These applications conform to a few well-defined interfaces that allow the batch runtime to manage the start of batch jobs designed for the application.

Batch work is expressed as jobs, which are made up of steps that are processed sequentially.
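The job-and-step structure can be illustrated with a minimal sketch (plain Java, not the WebSphere Batch API; the step logic is hypothetical):

```java
import java.util.List;
import java.util.function.IntSupplier;

// Minimal sketch: a job is an ordered list of steps, run sequentially.
// Execution stops at the first step that returns a non-zero return code.
class SequentialJob {

    static int runJob(List<IntSupplier> steps) {
        for (IntSupplier step : steps) {
            int rc = step.getAsInt();   // run the step, capture its return code
            if (rc != 0) {
                return rc;              // a failing step ends the job
            }
        }
        return 0;                       // all steps completed normally
    }

    public static void main(String[] args) {
        int rc = runJob(List.<IntSupplier>of(
                () -> 0,   // e.g. a hypothetical data-preparation step
                () -> 0    // e.g. a hypothetical record-processing step
        ));
        System.out.println("Job ended with rc=" + rc);
    }
}
```

The return-code convention (zero for success, non-zero to stop the job) mirrors the batch tradition the document describes.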

All jobs contain the following information:

Jobs for batch applications contain additional information specific to the batch programming model:


Elements of the batch environment

A typical batch environment consists of a job scheduler, batch container, batch applications, jobs, interfaces for management functions, and database tables.

The following list describes the elements of a batch environment:


Batch programming models

The transactional batch and compute-intensive programming models are both implemented as Java objects. They are packaged into an EAR file for deployment into the application server environment. The individual programming models provide details about how the lifecycle of the application and jobs submitted to it are managed by the grid endpoints. Central to all batch applications is the concept of a job to represent an individual unit of work to be run.

We can mix transactional batch, compute-intensive, and native execution job steps. The runtime uses the same controller for every job, regardless of the type of steps the job contains, and runs the appropriate logic for each step. These different job step types can also be run in parallel.

More information: For details about the WebSphere Batch programming models, see Chapter 6 in WAS V8.5 Concepts, Planning, and Design Guide, SG24-8022.


Transactional batch programming model

Transactional batch applications are EJB-based Java EE applications. These applications conform to a few well-defined interfaces that allow the batch runtime environment to manage the start of batch jobs destined for the application.
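In WebSphere Batch, a transactional batch step implements the BatchJobStepInterface lifecycle (createJobStep, processJobStep, destroyJobStep). The sketch below uses a simplified local stand-in for that interface, not the real API, to show the container-driven record loop; the record source and step logic are illustrative:

```java
// Simplified stand-in for the WebSphere Batch step lifecycle
// (modeled on BatchJobStepInterface; not the actual API).
interface BatchStep {
    int STEP_CONTINUE = 0;   // more records remain
    int STEP_COMPLETE = 1;   // all records processed

    void createJobStep();    // open data streams, allocate resources
    int processJobStep();    // process one record (or batch of records)
    int destroyJobStep();    // close streams, return the step's return code
}

class RecordCopyStep implements BatchStep {
    private final java.util.Iterator<String> input;
    private final java.util.List<String> output = new java.util.ArrayList<>();

    RecordCopyStep(java.util.List<String> records) {
        this.input = records.iterator();
    }

    @Override public void createJobStep() { /* streams would be opened here */ }

    // The container calls processJobStep repeatedly, taking a checkpoint
    // (committing the transaction) at configured intervals.
    @Override public int processJobStep() {
        if (!input.hasNext()) return STEP_COMPLETE;
        output.add(input.next().toUpperCase());  // the hypothetical business logic
        return STEP_CONTINUE;
    }

    @Override public int destroyJobStep() { return 0; }

    java.util.List<String> result() { return output; }

    public static void main(String[] args) {
        RecordCopyStep step = new RecordCopyStep(java.util.List.of("a", "b"));
        step.createJobStep();
        while (step.processJobStep() == STEP_CONTINUE) { /* container loop */ }
        System.out.println("rc=" + step.destroyJobStep() + " out=" + step.result());
    }
}
```

The loop in main plays the role of the batch container: the application supplies only the per-record logic, and the container owns iteration and checkpointing.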

Compute-intensive programming model

Compute-intensive applications are applications that perform intensive computational work that does not fit into the conventional Java EE request and response paradigm due to the following characteristics:

The compute-intensive programming model provides an environment that addresses these needs, centered around two basic concepts:

A compute-intensive application is packaged in an enterprise bean module in a Java EE EAR file. The deployment descriptor for the enterprise bean module must contain the definition of the controller bean. The implementation of the controller bean is provided in the application server runtime. The controller bean allows the runtime environment to control jobs for the application. When a job arrives for the application to run, the compute-intensive execution environment invokes the controller bean. The JNDI name of this stateless session bean is specified in the xJCL for the job.

A compute-intensive application is started by the application server in the same way as other Java EE applications are started. If the application defines any start-up beans, those beans are run when the application server starts.

We can use Java EE development tools, such as Rational Application Developer, to develop and package compute-intensive applications in the same way they are used to construct Java EE applications containing enterprise bean modules and asynchronous beans.
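The compute-intensive model can be sketched in the same spirit: all logic lives in a single run method, with a release method the container can call to request cancellation. The class and method shapes below are illustrative stand-ins, not the WebSphere API:

```java
// Simplified stand-in for a compute-intensive unit of work: one run()
// method that owns all the logic, plus release() for cancellation.
class PrimeCountWork {
    private volatile boolean released = false;
    private long count = 0;

    // All of the job's logic lives in run(); the container supplies no
    // record loop or checkpoints, unlike the transactional batch model.
    void run(int limit) {
        for (int n = 2; n <= limit && !released; n++) {
            if (isPrime(n)) count++;
        }
    }

    // Called by the container when the job is canceled.
    void release() { released = true; }

    long count() { return count; }

    private static boolean isPrime(int n) {
        for (int d = 2; (long) d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        PrimeCountWork work = new PrimeCountWork();
        work.run(100);
        System.out.println("primes up to 100: " + work.count());
    }
}
```

Note the contrast with the transactional model: the application, not the container, drives the work loop, which is why the model suits long CPU-bound computations.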


Parallel batch

A transactional batch application can be built as a job and divided into subordinate jobs so that the subordinate jobs can run independently and in parallel. Use the parallel job manager to submit and manage such transactional batch jobs.

The parallel job manager (PJM) provides a facility and framework for submitting and managing transactional batch jobs that run as a coordinated collection of independent parallel subordinate jobs. The PJM basic features are:

With PJM job management, the top-level job submits the subordinate jobs and monitors their completion. The top-level job end state is influenced by the outcome of the subordinate jobs, as follows:

  1. If all subordinate jobs complete in the ended state, a successful completion, the top-level job completes in the ended state.

  2. If any subordinate job completes in the restartable state and no subordinate job ended in the failed state, the top-level job completes in the restartable state.

  3. If any subordinate job completes in the failed state, the top-level job completes in the failed state.

  4. If the top-level job and subordinate jobs are in the restartable state, restart only the top-level job. If any subordinate jobs are restarted manually, the top-level job does not process the logical transaction properly.
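These completion rules can be expressed compactly (a sketch; the state names follow the list above):

```java
import java.util.List;

class TopLevelOutcome {

    // Derive the top-level job state from its subordinate job states,
    // following the PJM rules: any "failed" subordinate -> failed;
    // otherwise any "restartable" -> restartable; otherwise ended.
    static String aggregate(List<String> subordinateStates) {
        if (subordinateStates.contains("failed")) return "failed";
        if (subordinateStates.contains("restartable")) return "restartable";
        return "ended";
    }

    public static void main(String[] args) {
        System.out.println(aggregate(List.of("ended", "ended")));
        System.out.println(aggregate(List.of("ended", "restartable")));
        System.out.println(aggregate(List.of("restartable", "failed")));
    }
}
```

Checking "failed" before "restartable" encodes the precedence implied by rules 2 and 3.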

The steps in the execution of parallel batch jobs can be executed in different application server instances that are part of the same cluster. The steps are:

  1. First, the xJCL is submitted to the job scheduler, which dispatches the xJCL to an endpoint that runs the application the xJCL references.

  2. The batch container determines that the job is to have subordinate jobs running in parallel by inspecting the run property of the job in the xJCL, and then delegates the run to the PJM subcomponent.

  3. The PJM invokes the parameterizer API and uses the information in the xJCL to help divide the job into subordinate jobs. The PJM then invokes the LogicalTX synchronization API to indicate the beginning of the logical transaction. The PJM generates the subordinate job xJCL and submits the subordinate jobs to the job scheduler.

  4. The job scheduler dispatches the subordinate jobs to the batch container endpoints so they can run.

  5. The batch container runs the subordinate job. When a checkpoint is taken, the subordinate job collector API is invoked.

  6. This API collects relevant state information about the subordinate job. This data is sent to the subordinate job analyzer API for interpretation.

  7. After all subordinate jobs reach a final state, the beforeCompletion and afterCompletion synchronization APIs are invoked. The analyzer API is also invoked to calculate the return code of the job.

Other aspects to be taken into account to help understand how to optimally use the parallel job manager are:

COBOL support

COBOL has been a part of batch processing since the early days of computers and there is significant investment in mission-critical COBOL assets, especially on mainframes. With WAS V8.5, COBOL support includes the following key features:

The new COBOL container allows COBOL modules to be loaded into the WAS for z/OS address space and invoked directly. It provides a means of directly integrating COBOL resources into WebSphere Java processing. The container itself is implemented as a handful of DLLs and JAR files.

The COBOL container enables COBOL modules to be loaded into the batch container where they are invoked directly by the batch application. The COBOL container itself can be created and destroyed multiple times within the lifecycle of a server. Each container is created with a Language Environment enclave separate from that of the server. The container is assured of a clean Language Environment each time it is created.

Java programs can pass parameters into COBOL and retrieve the results. The COBOL call stub generator tool is provided to create the Java call stubs and data bindings based on the data and linkage definitions in the COBOL source.

We can dynamically update a COBOL module without having to restart the application server. Further, JDBC Type 2 connections created by the Java program can be shared with the COBOL program under the same transactional context. The COBOL container supports a wide variety of data types beyond integers, including primitive and national data types.
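The Java side of a COBOL container call follows a marshal-call-unmarshal pattern, sketched below. The stub and data-binding classes here are hypothetical stand-ins for what the COBOL call stub generator would produce; the placeholder performs the arithmetic in Java instead of crossing into a COBOL module:

```java
// Hypothetical sketch of the Java side of a COBOL container call.
class CobolStubSketch {

    // Stand-in for a generated data binding (would map to the COBOL
    // linkage section); field names are illustrative.
    static final class InterestArgs {
        long principalCents;
        int rateBasisPoints;
        long resultCents;   // filled in by the called module
    }

    // Stand-in for the generated call stub: a real stub would marshal
    // the arguments, invoke the COBOL module inside the container, and
    // unmarshal the result. This placeholder does the math in Java.
    static void callInterestModule(InterestArgs args) {
        args.resultCents = args.principalCents * args.rateBasisPoints / 10_000;
    }

    public static void main(String[] args) {
        InterestArgs in = new InterestArgs();
        in.principalCents = 100_000;    // $1,000.00
        in.rateBasisPoints = 500;       // 5.00%
        callInterestModule(in);
        System.out.println("interest (cents): " + in.resultCents);
    }
}
```

The point of the pattern is that the batch application only sees plain Java objects; the generated stub hides the data conversion between Java types and COBOL data items.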

Information: COBOL support on WAS V8.5 Batch

More information is also in section 6.5.3 COBOL support in WAS V8.5 Concepts, Planning, and Design Guide, SG24-8022.


Batch toolkit

IBM provides two features to help with the development of batch applications:

The batch toolkit contains the following components:


Configuring the batch environment

Configuration tasks for the batch environment include configuring the job scheduler and grid endpoints.

To set up an environment to host transactional batch or compute-intensive job types, you must deploy the job scheduler and the batch container to at least one WebSphere application server or cluster. The transactional batch, compute-intensive applications, or both are installed on the same WebSphere application server or cluster.

The job scheduler and batch container both require access to a relational database. Access to the relational database is through the underlying WAS connection management facilities. The supported relational databases are the same as those supported by WAS, including DB2, Oracle, and others. The simple file-based Apache Derby database is automatically configured by default so that you can quickly get a functioning environment up and running. However, do not use the Derby database in production. Moreover, the default Derby database supports neither a clustered job scheduler nor a clustered batch container.

A highly available environment includes both a clustered job scheduler and one or more clustered batch containers. Clustering requires a network database. Use a production-grade database, such as DB2, for this purpose.


Configure the job scheduler

The job scheduler accepts job submissions and determines where to run them. As part of managing jobs, the job scheduler stores job information in an external job database.

Configuration for the job scheduler includes the selection of the deployment target, data source JNDI name, database schema name, and endpoint job log location to be configured for the scheduler.

We can use the command-line interface, the EJB interface, the web services interface, and the job management console to communicate with the job scheduler.

Stand-alone application servers or clusters can host the job scheduler. The first time a server or cluster is selected to host the grid scheduler, an embedded Apache Derby database is automatically created and configured to serve as the scheduler database if the default data source JNDI name jdbc/lrsched is selected.

Although Derby is used as the default job scheduler database, you might want to use your own database.


Secure the job scheduler

WebSphere authentication determines the users, from the active WebSphere security registry, that can authenticate and gain access to the web, command line, and programmatic interfaces of the job scheduler. Therefore, we can secure the job scheduler application by simply enabling global security and application security.

Application security secures the job management console. The job scheduler application uses a combination of both declarative and instance-based security approaches to secure jobs and commands.

Finally, security for the batch environment is based on two basic principles of WebSphere security:


Job scheduler integration with external schedulers

Many customers already use an external workload scheduler to manage batch workloads on the z/OS operating system. While running Java batch inside a WAS environment is attractive, a way to control those batch jobs through an external workload scheduler is important.

We can integrate the job scheduler with an external workload scheduler by configuring and securing the job scheduler, enabling the interface, and running batch jobs with the WSGrid utility.


External scheduler integration

Because an external scheduler does not know how to directly manage batch jobs, a proxy model is used. The proxy model uses a regular JCL job to submit and monitor the batch job.

The JCL job step invokes a special program provided by batch, named WSGRID. The WSGRID application submits and monitors a specified batch job, writing intermediary results of the job into the JCL job log. WSGRID does not return until the underlying job is complete, consequently providing a synchronous execution model. Because the external scheduler can manage JCL jobs, it can manage a JCL job that invokes WSGRID. Using this pattern, the external scheduler can indirectly manage a job.

An optional plug-in interface in the job scheduler enables a user to add code that updates the external scheduler operation plan to reflect the unique state of the underlying job, such as job started, step started, step ended, job ended. The WSGRID program is written with special recovery processing so that if the JCL job is canceled, the underlying job is canceled also, thus ensuring synchronized lifecycle of the two jobs.
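WSGRID's synchronous behavior amounts to a poll-until-final-state loop, sketched below (the state values and polling interface are simplified assumptions, not the WSGrid implementation):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

class SyncSubmitSketch {

    // Poll the job's state until it reaches a final state, echoing each
    // intermediate state (as WSGRID writes them into the JCL job log),
    // then return the final state so the JCL step can set its own RC.
    // (A real monitor would wait between polls.)
    static String submitAndWait(Supplier<String> pollState) {
        String state;
        do {
            state = pollState.get();
            System.out.println("job state: " + state);
        } while (!state.equals("ended") && !state.equals("failed"));
        return state;
    }

    public static void main(String[] args) {
        // Simulate the scheduler reporting a sequence of states.
        Iterator<String> states =
                List.of("submitted", "executing", "ended").iterator();
        System.out.println("final: " + submitAndWait(states::next));
    }
}
```

Because the call does not return until a final state is reached, the enclosing JCL job's lifecycle tracks the batch job's lifecycle, which is what lets the external scheduler manage the batch job indirectly.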

Job control by an external workload scheduler in a z/OS platform environment can be pictured with the Tivoli Workload Scheduler as an example workload scheduler, communicating with the z/OS Job Entry Subsystem (JES).


Configuring the external scheduler interface

We can configure an external scheduler interface to control the workload for batch jobs. To communicate with the external scheduler interface, we can use:


Configure grid endpoints

To set up a WebSphere grid endpoint:

  1. Install a batch application on a server or cluster using the dmgr console, wsadmin commands, or another supported method for deploying applications.

  2. If the application is the first batch application installed on the server or cluster, restart the server or cluster.

The WebSphere grid endpoints are automatically set up. By installing the application on the deployment target, the common batch container is automatically deployed on the server or cluster selected using the default Apache Derby data source.

The default file-based Derby data source can be used only when using the batch function on a stand-alone application server. If we have a WAS ND environment, use a network database.


Configure the job scheduler and job management console

The job management console is a stand-alone web interface for managing jobs. It runs on a target application server or cluster within a WebSphere cell, more specifically, in the cluster or application server where the job scheduler is enabled.

With the job management console we can:

Some of the specific actions that we can execute through the job management console are:

When role-based security is enabled, you must be granted the lrsubmitter role, the lradmin role, or the lrmonitor role through the dmgr console to access the job management console.

When security is based on both group and role, you must be in the appropriate group and hold the appropriate role to access the job management console. You must be in the user group of the job or in the administrative group. You must also hold the lrsubmitter role, the lradmin role, or the lrmonitor role.

To access the job scheduler from the job management console:

  1. Configure the job scheduler.

  2. Ensure the job scheduler is running.

    If the application server or cluster members on which the job scheduler is installed have the started icon in the status field, the job scheduler is usually running. We can verify whether the job scheduler started by checking the log files.

  3. In a browser, type the web address:

      http://<job scheduler server host>:<port>/jmc

    If an on-demand router (ODR) is defined in the cell, type the web address:

      http://<odr host>:80/jmc

  4. If we cannot access the job management console, check the appropriate log. If you specified a server in the web address, check the server log. If you specified a cluster member in the web address, check the cluster member log.


Command-line interface for batch jobs

The command-line interface interacts with the job scheduler to submit and manipulate a batch job. It is located in the app_server_root/bin directory as the lrcmd.sh or lrcmd.bat script and can be started from any location in the WebSphere cell.

The following examples illustrate the use of the lrcmd script on a UNIX system:

Information: command line interface for batch jobs


Job logs

A job log is a file containing a detailed record of the execution details of a job. System messages from the batch container and output from the job executables are collected. By examining job logs, we can see the lifecycle of a batch job, including output from the batch applications themselves.

A job log is composed of the following three types of information:


Job classes

Each job is assigned to a job class, which defines policies to limit resource consumption by batch jobs. Job classes can be configured using the dmgr console.

Job classes establish policies for:


Example: Working with batch applications

The conditions for this scenario are:

  1. A stand-alone application server called itsoBatch that hosts the job scheduler and batch application.

  2. The WAS install path (WAS_HOME) is

      /opt/WAS/AppServer.

  3. The Java Batch IVT Sample is used. The sample is stored in /tmp/sample_ivt (referred to as <unzipped_sample_dir>).


Enabling the job scheduler

After the itsoBatch application server is created, the job scheduler can be configured using the dmgr console. The following instructions provide an example of how to configure the job scheduler using the dmgr console:

  1. Log on to the dmgr console.

  2. Navigate to the Job scheduler page:

    1. In the Scheduler hosted by list, select the deployment target.

    2. Type the database schema name. The default is LRSSCHEMA.

    3. Select the data source JNDI name from the list. If the default of (none) is selected, a default embedded Derby job scheduler database is created with a value of jdbc/lrsched.

    4. Type the directory where the job scheduler and the batch execution environment write the job logs. The default is...

        ${GRID_JOBLOG_ROOT}/joblogs

    5. Optional: Select the record usage data in scheduler database option to specify if the scheduler records job usage data for charge-back purposes in the scheduler database.

    6. Click OK and save the configuration.

    7. If administrative security is enabled, enable application security and secure the job scheduler.


Verifying the job scheduler installation

To verify the job scheduler is installed correctly:

  1. Restart the application server (or cluster members) where the job scheduler is configured.

    If the application server (or cluster members) on which the job scheduler is installed has the started icon in the status field, the job scheduler is active. We can verify whether the job scheduler started by checking the log files.

  2. Access the job management console through a web browser by typing: http://job_scheduler_server_host:grid_port/jmc

    The grid_port value is the WC_defaulthost port for the server running the job scheduler. To find this port, go to your server in the dmgr console, expand ports, and look for WC_defaulthost. In the case of our test environment, the URL is http://saw211-RHEL3:9080/jmc.

    To ensure the job management console is working correctly, check the SystemOut.log file on the target application server configured to host the job scheduler. Example 22-3 shows the message written to the log after the application has started.

    [6/20/12 12:04:48:739 EDT] 0000006c JobSchedulerS I CWLRB3220I: Long Running Job Scheduler is initialized
    


Installing the sample batch application

To install the IVT sample batch application into WAS:

  1. Download the sample IVT application compressed file from the WAS V8.5 Information Center website.

  2. Unzip the file on your target server.

  3. Configure JAVA_HOME and add it to the PATH environment variable so that you can run java and create the IVT database on Derby.

  4. Run the command java -version to verify Java v1.6 or later is installed and defined in the system’s path.

  5. Create the IVTDB database in the server where the target application server is running.

    Use the appropriate CreateIVTTables DDL file (CreateIVTTablesDerby.ddl) located in the <unzipped_sample_dir>/IVT/scripts directory of the uncompressed IVT sample application. From a command prompt, issue the following commands:

      cd WAS_HOME/derby/databases
      java -Djava.ext.dirs=WAS_HOME/derby/lib -Dij.protocol=jdbc:derby: org.apache.derby.tools.ij <unzipped_sample_dir>/scripts/CreateIVTTablesDerby.ddl

  6. Create the JDBC resources:

    1. In the dmgr console, click...

    2. Create a JDBC XA provider at the server scope with the following properties:

      • Database type: Derby
      • Provider type: Derby JDBC Provider
      • Implementation type: XA data source
      • Name: XDCGIVT Derby JDBC Provider (XA)
      • Description: Accept the default value

    3. Click through the remaining panels. On the last panel, click Finish.

    4. In the dmgr console, click...

    5. Create a data source with the following properties:

      • Data source name: XDCGIVT data source (XA)
      • JNDI name: jdbc/IVTdbxa
      • JDBC provider: XDCGIVT Derby JDBC Provider (XA)
      • Database name: WAS_HOME/derby/databases/IVTDB
      • Select the option Use this data source in container managed persistence (CMP).
      • Security aliases: Accept the default values.

    6. Click through the remaining panels. On the last panel, click Finish.

    7. Save the configuration to the master repository and synchronize the nodes.

  7. Select the new data source, and click Test Connection to test the connection to the database.

  8. Install the XDCGIVT sample using the dmgr console:

    1. In the dmgr console, click...

        Applications | New application | New Enterprise Application

    2. Specify the full path to the sample XDCGIVT.ear file (<unzipped_sample_dir>/installableApps/XDCGIVT.ear).

    3. In the wizard, select Fast Path - Prompt only when additional information is required, accept the default settings, apply the proper module mappings, and continue through the steps.

      When mapping modules of the batch application to servers, select the server (or cluster) to run the batch job (itsoBatch). Click Finish when you are done.

    4. Restart the application server.

  9. After the application server is restarted, verify the application installed successfully:

    1. Go to the Enterprise applications dmgr console page by clicking...

        Applications | Application Types | WebSphere enterprise applications

    2. If the application is not running, select the application, and click Start.


Secure the job scheduler using job groups

We can secure the job scheduler using groups. A user can then act on a job only if the user and job are members of the same group.

This example assumes the job scheduler is configured and that WebSphere security is enabled. It also assumes that a group was created, along with a user that belongs to that group. For this example, the user ID is user1 and the group is BATCHGROUP. Group security is enabled for the job scheduler by mapping authenticated users to the lradmin administrative security role. The next step is to assign a group to a job.

We can use the dmgr console to enable job group security for the job scheduler with the following procedure:

  1. Enable job group security for the job scheduler:

    For the purpose of this scenario, we mapped wasadmin, our primary WebSphere administrative ID, to the lradmin role, and a user group, BATCHGROUP, to the lrsubmitter role. This allows you to access the job scheduler console with different roles and understand the different permissions each role provides. There is also the lrmonitor role that can be used for ID and group mapping:

    1. Click...

    2. Select lradmin for the role, and click Map Users:

      1. In the Job scheduler | Security role to user/group mapping | Map users/groups window, click Search to list all users.

      2. Select your primary WebSphere administrative ID from the Available list, which in this example is wasadmin, and click the right arrow button to add the selected user to the Selected list.

      3. Click OK.

    3. Select the lrsubmitter role, and click Map Groups.

      1. In the Job scheduler | Security role to user/group mapping | Map users/groups window, click Search to list all groups.

      2. Select BATCHGROUP from the Available list, and click the right arrow button to add the selected group to the Selected list.

      3. Click OK.

    4. Save the updates.

    5. Restart the server.

    6. Verify that group security is enabled. A message in the SystemOut.log file of the application server (in our case the batchJVM01 application server) indicates that group security is enabled.
      CWLRB5837I: The WAS Batch Feature is running under GROUP security policy


Use the job management console

After the job scheduler is enabled with the proper security settings and you completed the deployment of the sample IVT batch application, we can use the job management console to perform administrative tasks for the batch job.

The following URL is used to access the job management console:

For comparison, access the job management console with the user ID that was mapped to the lradmin role, which in our case is the wasadmin ID. Note the full range of functionality available.

Next, access the console with the user ID mapped to the lrsubmitter role, which in this test case is user1. Note the noticeable difference between the lradmin and the lrsubmitter permissions. The user1 ID has restricted access in the console.


Submitting a job

The sample IVT application contains a few batch job xJCL files:

To test these batch jobs, first edit these files and set the following parameters to a valid location on your test server:

Use the command-line interface for batch jobs

The command-line interface interacts with the job scheduler to submit and manipulate a batch job.


Showing the status of batch jobs

To list the status of previously submitted jobs:

  1. Connect to the server where the job scheduler is active.

  2. cd WAS_HOME/bin.

  3. Execute the following command:

      ./lrcmd.sh -cmd=status -host=<job_scheduler_host> -port=<job_scheduler_port> -userid=<userid_job_scheduler> -password=<password_job_scheduler>

    Example:

      ./lrcmd.sh -cmd=status -host=saw211-RHEL3 -port=9080 -userid=user1 -password=batch

Output from the lrcmd command for status of batch jobs

CWLRB4940I: com.ibm.ws.batch.wsbatch : -cmd=status -host=saw211-RHEL3 -port=9080 -userid=user1 -password=******** 
CWLRB5000I: Wed Jun 27 18:07:10 EDT 2012 : com.ibm.ws.batch.wsbatch : response to status 
CWLRB3060I: [2012-06-27 15:07:16.459] [XDCGIVT:00000] [pending submit] [Batch] [user1] [] []
CWLRB3060I: [2012-06-27 17:42:50.050] [XDCGIVT:00004] [ended] [Batch] [user1] [saw211-RHEL3Node01] [itsoBatch]
CWLRB3060I: [2012-06-27 17:42:55.314] [XDCGIVT:00005] [pending submit] [Batch] [user1] [] []
CWLRB3060I: [2012-06-27 17:44:05.537] [XDCGIVT:00006] [pending submit] [Batch] [user1] [] []
CWLRB3060I: [2012-06-27 17:52:39.120] [XDCGIVT:00007] [pending submit] [Batch] [user1] [] [] 
CWLRB3060I: [2012-06-27 18:06:30.249] [XDCGIVT:00008] [pending submit] [Batch] [user1] [] []


Viewing details of job schedules

To view the details of previously created job schedules:

  1. Connect to the server where the job scheduler is active.

  2. cd WAS_HOME/bin.

  3. Execute the following command:

      ./lrcmd.sh -cmd=getRecurringRequestDetails -request=<request_name> -host=<job_scheduler_host> -port=<job_scheduler_port> -userid=<userid_job_scheduler> -password=<password_job_scheduler>

    For example:

      ./lrcmd.sh -cmd=getRecurringRequestDetails -request=weeklySchedule -host=saw211-RHEL3 -port=9080 -userid=wasadmin -password=need2reset

      Output from the lrcmd command showing details of batch job schedule

        [root@saw211-RHEL3 bin]# ./lrcmd.sh -cmd=getRecurringRequestDetails -request=weeklySchedule -host=saw211-RHEL3 -port=9080 -userid=wasadmin -password=need2reset
        CWLRB4940I: com.ibm.ws.batch.wsbatch : -cmd=getRecurringRequestDetails -request=weeklySchedule -host=saw211-RHEL3 -port=9080 -userid=wasadmin -password=********
        CWLRB5000I: Thu Jun 28 09:27:13 EDT 2012 : com.ibm.ws.batch.wsbatch : response to getRecurringRequestDetails
        CWLRB5430I: [weeklySchedule] [2012-07-01 22:00:00] [weekly] [inputDataStream="/tmp/ivtJobs/input-text.txt" supportclassOut="com.ibm.websphere.batch.devframework.datastreams.patterns.TextFileWriter" checkPoint="10" perfEnabled="true" numberRecords="100" outputDataStream="/tmp/ivtJobs/output-text.txt" supportclassIn="com.ibm.websphere.batch.devframework.datastreams.patterns.TextFileReader" debugEnabled="false" fileEncoding="8859_1"] []


Checking the batch job logs

After the job is submitted from the Job management console, we can verify the logs to determine if the execution was successful.

The job logs are available in the following folder:

There are two logs to be verified:

  • part.0.log

    This log shows the initial load and dispatch information for the job, including the dispatch to the grid endpoint that will execute the process. Example 22-7 shows a snippet of the log file.

    part.0.log file

    CWLRB5684I: [06/20/12 14:22:23:411 EDT] Job XDCGIVT:00000 is queued for execution
    CWLRB5586I: [06/20/12 14:22:23:484 EDT] CWLRS6006I: Job class Default, Importance 8, Service Class null, Service Goal Type 0, Application Type j2ee, Submitter user1.
    CWLRB5586I: [06/20/12 14:22:23:484 EDT] CWLRS6007I: Job Arrival Time 6/20/12 2:22 PM, Goal Max Completion Time 0, Goal Max Queue Time 0, Breach Time 6/21/12 2:22 PM.
    CWLRB5586I: [06/20/12 14:22:23:485 EDT] CWLRS6021I: List of eligible endpoints to execute the job: saw211-RHEL3Node01/batchJVM01.
    CWLRB5586I: [06/20/12 14:22:23:486 EDT] CWLRS6011I: APC is not active. GAP will make the endpoint selection.
    CWLRB5586I: [06/20/12 14:22:24:863 EDT] CWLRS6013I: GAP is dispatching job XDCGIVT:00000. Job queue time 1.399 seconds.
    CWLRB3090I: [06/20/12 14:22:25:440 EDT] Job XDCGIVT:00000 is dispatched to endpoint saw211-RHEL3Node01/batchJVM01: result: 0
    

  • part.2.log

    This output includes any application-generated output directed to the System.out and System.err output streams. Example 22-8 shows a snippet of the log file.

    CWLRB5610I: [06/20/12 14:22:30:423 EDT] Firing IVTStep3 results algorithm com.ibm.wsspi.batch.resultsalgorithms.jobsum: [RC 0] [jobRC 0]
    CWLRB5624I: [06/20/12 14:22:30:503 EDT] Stopping step IVTStep3 chkpt checkpoint. User transaction status: STATUS_ACTIVE
    CWLRB5602I: [06/20/12 14:22:30:607 EDT] Closing IVTStep3 batch data stream: inputStream
    CWLRB5602I: [06/20/12 14:22:30:608 EDT] Closing IVTStep3 batch data stream: generatedOutputInputStream
    CWLRB5604I: [06/20/12 14:22:30:609 EDT] Freeing IVTStep3 batch data stream: inputStream
    CWLRB5604I: [06/20/12 14:22:30:609 EDT] Freeing IVTStep3 batch data stream: generatedOutputInputStream
    CWLRB5854I: [06/20/12 14:22:30:610 EDT] Job Step [XDCGIVT:00000,IVTStep3]: Metric = clock Value = 00:00:00:005
    CWLRB5854I: [06/20/12 14:22:30:611 EDT] Job Step [XDCGIVT:00000,IVTStep3]: Metric = retry Value = 0
    CWLRB5844I: [06/20/12 14:22:30:611 EDT] Job Step Batch Data Stream [XDCGIVT:00000,IVTStep3,generatedOutputInputStream]: Metric = skip Value = 0
    CWLRB5844I: [06/20/12 14:22:30:612 EDT] Job Step Batch Data Stream [XDCGIVT:00000,IVTStep3,generatedOutputInputStream]: Metric = rps Value = 484,027
    CWLRB5844I: [06/20/12 14:22:30:613 EDT] Job Step Batch Data Stream [XDCGIVT:00000,IVTStep3,inputStream]: Metric = skip Value = 0
    CWLRB5844I: [06/20/12 14:22:30:614 EDT] Job Step Batch Data Stream [XDCGIVT:00000,IVTStep3,inputStream]: Metric = rps Value = 428,816
    CWLRB2600I: [06/20/12 14:22:30:614 EDT] [06/20/12 14:22:30:614 EDT] Job [XDCGIVT:00000] Step [IVTStep3] completed normally rc=0.
    CWLRB3800I: [06/20/12 14:22:30:621 EDT] Job [XDCGIVT:00000] ended normally.