Virtual application patterns


  1. Overview
    1. Components
    2. Application components
    3. Database components
    4. Messaging components
    5. OSGi components
    6. Transaction processing components
    7. Install the CICS resource adapter
    8. User Registry components
    9. Other components
    10. Policies
  2. Types
    1. Import pattern types
    2. Delete pattern types
    3. Upgrade pattern types
    4. Upgrade deployed pattern types
    5. View pattern types
    6. View plug-ins in pattern types
  3. Supported virtual application pattern types
    1. IBM Database Patterns
      1. Get started
      2. Work with databases
      3. Work with database patterns
    2. IBM Web Application Pattern
      1. Get started
      2. Work with deployed Web applications
      3. Troubleshoot and support for the Web Application Pattern
    3. IBM Application Pattern Type for Java
      1. The Application Pattern Type for Java
      2. Get started with the Application Pattern Type for Java
      3. Application Pattern Type for Java virtual application components and policy
  4. Manage virtual application patterns
    1. Create virtual application patterns
    2. Create virtual application layers
    3. Import virtual application patterns
    4. Import virtual applications as layers
    5. Modify virtual application patterns
    6. Modify virtual application layers
    7. Delete virtual application patterns
    8. Delete virtual application pattern layers
    9. Configure components and policies
    10. Deploy virtual application patterns
    11. Clone virtual application patterns
    12. Export virtual application patterns
  5. Enable and upgrade Image Construction and Composition Tool
    1. Enable for the first time
    2. Enable after deletion
  6. Work with virtual application pattern plug-ins
    1. Plug-ins included with pattern types
    2. Administer system plug-ins
      1. System plug-ins overview
      2. Add, modify, and delete system plug-ins
    3. Plug-in development guide
      1. Plug-in Development Kit
      2. Plug-in development overview
      3. Plug-in development reference
        1. Kernel services
        2. Shared services
        3. Application modeling
        4. Deployment
        5. Define plug-in variables
        6. Develop lifecycle scripts
        7. Application model and topology document examples
        8. Troubleshoot and monitoring services
        9. Pattern type packaging reference
        10. Plug-in validation
      4. Plug-ins for development
      5. Other configuration options
      6. Samples
      7. Set up the plug-in samples environment
      8. Sample: Import and deploy the sample pattern types
      9. Sample: Developing a plug-in and pattern type with Eclipse
      10. Sample: Creating a virtual application
      11. Sample: Creating a plug-in project from the command line
      12. Sample: Building a single plug-in and pattern type with the command-line tools


Work with virtual application patterns

A virtual application, which is defined by a virtual application pattern, is a complete set of platform resources that fulfills a business need, including web applications, databases, user registries, messaging services, and transaction processes.

Each virtual application pattern is associated with a pattern type, which is a collection of plug-ins that provide these resources and services for a particular business purpose in the form of components, links, and policies. Pattern types are product extensions of the cloud system, and the types of virtual applications that you can build depend on the pattern types that you have enabled.


Virtual application patterns overview

IBM PureApplication System W1500 provides support for creating application-centric deployments called virtual applications. Virtual application builders and deployers describe virtual applications in terms of the application artifacts and required quality of service levels. PureApplication System determines the infrastructure and middleware that is required to host the virtual application at deployment time.


Virtual application patterns

PureApplication System provides a generic framework for designing, deploying, and managing virtual applications. The model that you build by using the application artifacts and quality of service levels is called a virtual application pattern. You can use predefined patterns, extend existing patterns, or create new ones.

When you build a virtual application pattern, you create the model of a virtual application by using components, links, and policies.

Consider an order management application with the following requirements: it runs as a web application on WebSphere Application Server (WAS), it connects to an existing DB2 database, and it must maintain a web response time of 1000 - 5000 ms.

A virtual application builder can use PureApplication System to create a virtual application pattern by using components, links, and policies to specify each of these requirements.

Component

Represents an application artifact such as a WAR file, and attributes such as a maximum transaction timeout. In terms of the order management application example, the components for the application are the WAS nodes and the DB2 nodes. The WAS components include the WAR file for the application, and the DB2 components connect the application to the existing DB2 server.

Link

A connection between two components. For example, if a web application component has a dependency on a database component, an outgoing link from the web application component to the database component defines this dependency. In terms of the order management application example, links exist between the WAS components and the DB2 components.

Policy

Represents a quality of service level for application artifacts in the virtual application. Policies can be applied globally at the application level or specified for individual components. For example, a logging policy defines logging settings and a scaling policy defines criteria for dynamically adding or removing resources from the virtual application. In terms of the order management application example, a Response Time Based scaling policy is applied that scales the virtual application in or out to keep the web response time between 1000 and 5000 ms.

When you deploy a virtual application, the virtual application pattern is converted from a logical model to a topology of virtual machines deployed to the cloud. Behind the scenes, the system determines the underlying infrastructure and middleware that is required for the application, and adjusts them as needed to ensure that the quality of service levels that are set for the application are maintained. A deployed topology that is based on a virtual application pattern is called a virtual application instance. You can deploy multiple virtual application instances from a single virtual application pattern.

The components, links, and policies that are available to design a particular virtual application pattern are dependent on the pattern type that you choose and the plug-ins that are associated with the pattern type.


Virtual application pattern types and plug-ins

A pattern type represents a collection of related components, links, and policies used to build a set of virtual applications. A pattern type defines a virtual application domain. For example, the IBM Web Application Pattern pattern type defines a domain in which Java EE web applications are deployed. It includes components for WAR, EAR, and OSGi EBA files. These components have an attribute for the appropriate archive file, which an application builder specifies during construction of the virtual application pattern.

The web application can connect to a database, so the pattern type also includes a component to represent the database and provides its connection properties as attributes. The pattern type also defines a link between the database and the WAR file to represent communication between the application and the database.

The application components (WAR, EAR, and OSGi EBA) can all be configured with quality of service levels by applying policies. The available options include scaling, routing, logging, and JVM policies.

The plug-ins that are associated with Web Application Pattern define these components, links, and policies. They also provide the underlying implementation to deploy the virtual applications in the cloud and perform maintenance on deployed virtual application instances.

Virtual application builders create virtual application patterns in the Virtual Application Builder. Within Virtual Application Builder, you begin by selecting the pattern type to use. This choice determines the set of components, links, and policies that you can use to build the virtual application and the type of virtual applications that you can create.

Plug-in developers are responsible for creating or customizing pattern types and plug-ins that control the available components, links, and policies and corresponding configuration options, as well as the code for implementing deployments.


Options for reusing virtual application configuration

To simplify complex application design and standardize reuse of common components and configuration across multiple virtual applications, you can use several options:

Virtual application templates

A predefined virtual application pattern that can include components that are already pre-configured. You can use a template as a foundation for creating virtual application patterns that use the same basic configuration. Alternatively, you can deploy a virtual application directly from a template and specify any required settings at deployment time.

Virtual application layers

A grouping of components within a virtual application pattern. You can set up multiple layers within a single virtual application pattern or you can import one virtual application pattern into another as a reference layer.

Reusable components

You can pre-configure a virtual application component and save it for reuse by any virtual application pattern based on the same pattern type.


Virtual application pattern components

A virtual application pattern contains components that represent middleware services required by the virtual application instance.

You can connect components in a virtual application pattern to indicate dependencies, and you can optionally apply policies to configure specific behavior or define a quality of service level for the middleware services at deployment. Components, links, and policies can have required and optional attributes.

Components, links, and policies are defined by plug-ins. When you create a virtual application pattern, the available components, links, policies, and configuration options are determined by the plug-ins included with the selected pattern type.


Components

The following components are available with the virtual application patterns provided with IBM PureApplication System W1500.


Policies

You can optionally apply policies to a virtual application to configure specific behavior in the deployed virtual application instance. Two virtual applications might include identical components, but require different policies to achieve different service level agreements. For example, if you want a web application to be highly available, you can add a scaling policy to the web application component and specify requirements such as a processor usage threshold to trigger scaling of the web application. At deployment time, the topology of the virtual application is configured to dynamically scale the web application. Multiple WAS instances are deployed initially for the web application and instances are added and removed automatically based on the service levels that are defined in the policy.

Policies can be applied only to particular types of components. For more information, see the following links:


Application components


Additional archive file

The additional archive file component supplies an additional archive file to accompany your primary archive.

The following attributes are required for an additional archive file:


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component that represents an execution service for Java EE web archive (WAR files).

  • Provider policy set binding
  • Service name
  • Binding file
  • Key store
  • Trust store (encryption)
  • Trust store (digital signature)

Enterprise application (WAS) An enterprise application (WAS) cloud component that represents an execution service for Java EE enterprise archive (EAR files).

  • Provider policy set binding
  • Service name
  • Binding file
  • Key store
  • Trust store (encryption)
  • Trust store (digital signature)


Enterprise application component

The enterprise application (WAS) component represents an execution service for Java EE EAR files.

You cannot use an enterprise application that includes Container Managed Persistence V 2.0 beans. This type of application requires deployment tools that are not included in this product's WAS binary files. The following are attributes for an enterprise application:

EAR file Enterprise archive (.ear) file to be uploaded. Required. Java EE 5 and later applications are supported; however, the application must provide an application.xml file in the EAR file if the pattern uses a proxy shared service.
Total transaction lifetime timeout Default maximum time, in seconds, allowed for a transaction that is started on this server before the transaction service initiates timeout completion. Any transaction that does not begin completion processing before this timeout occurs is rolled back. The default is 120 seconds.
Asynchronous response timeout Amount of time, in seconds, that the server waits for responses to WS-AT protocol messages. The default is 120 seconds.
Client inactivity timeout Maximum duration, in seconds, between transactional requests from a remote client. Any period of client inactivity that exceeds this timeout results in a rollback of the transaction in this application server. The default is 60 seconds.
Maximum transaction timeout Specifies, in seconds, the maximum transaction timeout for transactions that run in this server. This value must be greater than or equal to the value that is specified for the total transaction lifetime timeout. The default is 300 seconds.
Interim fixes URL Location or URL of the selected interim fixes. This URL is used by the WAS virtual machine to download interim fixes for updating your environment.


Policies


Policy components for enterprise applications

Policy name Description
Scaling policy (web or enterprise application) Scaling is a run time capability to automatically scale your application platform as the load changes. A scaling policy component defines this capability and the conditions under which scaling activities occur for your application.
Routing policy (web, enterprise, or OSGi enterprise bundle archive (EBA) application) Routing policy for a web application, enterprise application, or an OSGi EBA application.
Log policy (web or enterprise application) A policy to specify configuration for log record files.
JVM policy (web or enterprise application) A policy to control features of the underlying Java virtual machine (JVM).


Incoming connectable components

Component name Description
Enterprise application (WAS) An enterprise application (such as a WAS application) cloud component that represents an execution service for EAR files.
Web application (WAS) A web application cloud component that represents an execution service for WAR files.


Outgoing connectable components

Component name Description Connection
Topic (WebSphere MQ) A topic that represents a message destination on an IBM WebSphere MQ messaging service through which messages are published and subscribed.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

  • JNDI name
  • Resource environment references
  • Message destination references

Additional archive file An additional archive file component for your primary archive.
Existing messaging service (WebSphere MQ) A messaging service that represents a connection to an external messaging system such as WebSphere MQ.

  • JNDI name of the Java Message Service (JMS) connection factory
  • Resource references of the JMS connection factory
  • Client ID

Policy set A component used to define quality of service policies.
Existing database (Oracle) An existing Oracle database component that represents a connection to an existing Oracle database instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Oracle database.
Connect Out A component used to open the firewall for outbound TCP connections from a web or enterprise application to a specified host and port.
Connect In A component used to open the firewall for inbound TCP connections from a specified address or range of addresses, to a specified port in the target application component.
Database (DB2) A database (DB2) component that represents a pattern-deployed database service.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source
  • Minimum connections
  • Maximum connections
  • Connection timeout

Existing database (DB2) An existing DB2 database component that represents a connection to a remote DB2 database instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the remote DB2 database.
Existing database (Informix) An existing Informix database component that represents a connection to a remote Informix database instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Informix database.
Existing CICS Transaction Gateway (CTG) An existing CTG component that represents a connection to an existing CTG instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the CTG.
Existing IMS database Existing Information Management System (IMS) database system.
Existing user registry (IBM Tivoli Directory Server) An existing user registry (such as Lightweight Directory Access Protocol (LDAP)) cloud component that represents a pattern-deployed LDAP service that can be deployed by itself, or attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security.

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

Existing user registry (Microsoft Active Directory) An existing user registry, such as LDAP, that represents an existing LDAP service that can be attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container managed security.
User registry (Tivoli Directory Server) A user registry (such as Tivoli Directory Server) cloud component that represents a pattern-deployed LDAP service that can be deployed by itself, or attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security.
Existing IMS TM Existing IMS Transaction Manager (IMS TM).
Enterprise application (WAS) An enterprise application (such as a WAS application) cloud component that represents an execution service for Java EE enterprise applications (EAR files).
Web application (WAS) A web application (such as a WAS application) cloud component that represents an execution service for WAR files.
Existing web service provider endpoint A web service provider that is provided by a remote server.
Queue (WebSphere MQ) A message queue on a WebSphere MQ messaging service through which messages are sent and received.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

  • JNDI name
  • Resource environment references
  • Message destination references

To make a connection between a component and the enterprise application, hover over the blue circle on the enterprise application component part on the canvas. When the blue circle turns yellow, draw a connection between the enterprise application and component.

Use the property panel to upload the EAR files. To make associations with other services, create a link to the corresponding virtual application pattern component. Currently, support is limited to one database and one user registry connection. A scaling policy object can be attached to specify a highly available pattern.

CAUTION:

When you use an enterprise application to deploy a database component, you define the database schema in the SQL file. Do not add the connect to <dbname> statement in the SQL file. The schema object used is db2inst1 and not appuser.

In addition to uploading your EAR file, you can upload more files, such as a compressed file that contains configuration details or other information. When the WebSphere process starts, the compressed file is extracted to a directory and the icmp.external.directory system property is set. If you attach a scaling policy to the web application component, each virtual machine contains a copy of the compressed file, and any updates that are made to the file or directory on one virtual machine are not reflected in the copy of the file on another virtual machine.
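
For example, the application can locate the extracted files at run time by reading that system property. The following is a minimal sketch; app.properties is a hypothetical file name packaged in the uploaded archive:

import java.io.File;

public class ExternalConfigLocator {

    public static File locateConfig() {
        // icmp.external.directory points at the directory where the
        // uploaded compressed file was extracted
        String dir = System.getProperty("icmp.external.directory");
        if (dir == null) {
            return null; // no additional archive was uploaded
        }
        // app.properties is a hypothetical file name
        return new File(dir, "app.properties");
    }
}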

By default, the application is available at http://{ip_address}/{context_root}, where:


Existing Web Service Provider Endpoint

A web service provider endpoint is a web service provider that is provided by a remote server.

To use the web service plug-in in IBM PureApplication System W1500, the web service client must be updated as follows:

  1. Include the web service WSDL file in the application package. The WSDL file is in the WEB-INF/wsdl directory.
  2. Update the web service client source code to specify the location of the WSDL file. For example: "file:/WEB-INF/wsdl/{WSDL_FILE_NAME}"
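
A minimal JAX-WS client sketch that applies step 2; the WSDL file name, namespace, service name, and the StockQuote service endpoint interface are hypothetical placeholders for your own web service artifacts:

import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class WebServiceClientExample {

    // Hypothetical service endpoint interface for the remote provider
    @WebService(targetNamespace = "http://example.com/stock")
    public interface StockQuote {
        String getQuote(String symbol);
    }

    public static void main(String[] args) throws Exception {
        // Load the WSDL file that is packaged in WEB-INF/wsdl instead of
        // downloading it from the remote server
        URL wsdl = new URL("file:/WEB-INF/wsdl/StockQuoteService.wsdl");
        QName serviceName =
            new QName("http://example.com/stock", "StockQuoteService");
        Service service = Service.create(wsdl, serviceName);

        StockQuote port = service.getPort(StockQuote.class);
        System.out.println(port.getQuote("IBM"));
    }
}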

The following is a required attribute for a web service provider endpoint component:


Connections


Incoming connectable components

Component name Description Connection properties
Enterprise application (WAS) An enterprise application (WAS) cloud component that represents an execution service for Java Platform, Enterprise Edition (Java EE) enterprise archive (EAR files).

  • Service name

Web application (WAS) A web application (WAS) cloud component that represents an execution service for Java EE web archive (WAR files).

  • Service name

To make a connection between a component and the web service provider endpoint, hover over the blue circle on the web service provider endpoint component part on the canvas. When the blue circle turns yellow, draw a connection between the web service provider endpoint and the component.


Policy Set

A policy set is a component used to define quality of service (QoS) policies. It is a collection of assertions about how services are defined, which can be used to simplify security configurations.

For more information about policy sets in IBM WAS, see the WAS Information Center that is linked to from the Related information section. Refer to the topics: "Managing policy sets using the administrative console" and "Exporting policy sets using the administrative console". The required Policy Set File can be retrieved by using the steps in the "Exporting policy sets using the administrative console" topic.

The following is a required attribute for a policy set component:


Connections


Incoming connectable components

Component name Description Connection properties
Enterprise application (WAS) An enterprise application (WAS) cloud component that represents an execution service for Java Platform, Enterprise Edition (Java EE) enterprise archive (EAR files).

  • Provider policy set binding
  • Service name
  • Binding file
  • Key store
  • Trust store (encryption)
  • Trust store (digital signature)

Web application (WAS) A web application (WAS) cloud component that represents an execution service for Java EE web archive (WAR files).

  • Provider policy set binding
  • Service name
  • Binding file
  • Key store
  • Trust store (encryption)
  • Trust store (digital signature)

To make a connection between a component and the policy set, hover over the blue circle on the policy set component part on the canvas. When the blue circle turns yellow, draw a connection between the policy set and the component.


Web application component

The web application component represents an execution service for Java Platform, Enterprise Edition (Java EE) web archive (WAR) files.

The following is a required attribute for a web application:


Policies


Policy components for web applications

Policy name Description
Routing policy (web, enterprise, or OSGi enterprise bundle archive (EBA) application) Routing policy for a web application, enterprise application, or an OSGi EBA application.
Log policy (web or enterprise application) A policy to specify configuration for log files.
JVM policy (web or enterprise application) A policy to control features of the underlying Java virtual machine (JVM).
Scaling policy (web or enterprise application) Scaling is a run time capability to automatically scale your application platform as the load changes. A scaling policy component defines this capability and the conditions under which scaling activities occur for your application.


Connections


Incoming connectable components

Component name Description
Enterprise application (WAS) An enterprise application (such as a WAS application) cloud component that represents an execution service for Java EE enterprise applications (EAR) files.
Web application (WAS) A web application cloud component that represents an execution service for Java EE web applications (WAR) files.


Outgoing connectable components

Component name Description Connection
Topic (WebSphere MQ) A topic that represents a message destination on an IBM WebSphere MQ messaging service through which messages are published and subscribed.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

  • JNDI name
  • Resource environment references
  • Message destination references

Additional archive file An additional archive file component for your primary archive.
Existing messaging service (WebSphere MQ) A messaging service that represents a connection to an external messaging system such as WebSphere MQ.

  • JNDI name of the Java Message Service (JMS) connection factory
  • Resource references of the JMS connection factory
  • Client ID

Policy set A component used to define quality of service policies.
Existing database (Oracle) An existing Oracle database component that represents a connection to an existing Oracle database instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Oracle database.
Connect Out A component used to open the firewall for outbound TCP connections from a web or enterprise application to a specified host and port.
Connect In A component used to open the firewall for inbound TCP connections from a specified address or range of addresses, to a specified port in the target application component.
Database (DB2) A database (DB2) component that represents a pattern-deployed database service.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source
  • Minimum connections
  • Maximum connections
  • Connection timeout

Existing database (DB2) An existing DB2 database component that represents a connection to a remote DB2 database instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the remote DB2 database.
Existing database (Informix) An existing Informix database component that represents a connection to a remote Informix database instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Informix database.
Existing CICS Transaction Gateway (CTG) An existing CTG component that represents a connection to an existing CTG instance that runs remotely outside of the cloud. The configuration properties allow a connection to be made to the CTG.
Existing Information Management System (IMS) database Existing IMS database system.
Existing user registry (Tivoli Directory Server) An existing user registry (such as Lightweight Directory Access Protocol (LDAP)) cloud component that represents a pattern-deployed LDAP service that can be deployed by itself, or attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security.

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

Existing user registry (Microsoft Active Directory) An existing user registry (such as LDAP) that represents an existing LDAP service that can be attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container managed security.
User registry (Tivoli Directory Server) A user registry (such as Tivoli Directory Server) cloud component that represents a pattern-deployed LDAP service that can be deployed by itself or attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security.
Existing IMS Transaction Manager Existing IMS TM.
Enterprise application (WAS) An enterprise application (such as a WAS application) cloud component that represents an execution service for Java EE enterprise applications (EAR files).
Web application (WAS) A web application (such as a WAS application) cloud component that represents an execution service for WAR files.
Existing web service provider endpoint A web service provider that is provided by a remote server.
Queue (WebSphere MQ) A message queue on a WebSphere MQ messaging service through which messages are sent and received.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

  • JNDI name
  • Resource environment references
  • Message destination references

Use the property panel to upload the WAR files. You can also specify a context root. To make associations with other services, create a link to the corresponding cloud component. Currently, support is limited to one database and one user registry connection. A high availability (HA) policy object can be attached to specify an HA pattern.

In addition to uploading your WAR file, you can upload more files, such as a compressed file that contains configuration details or other information. When the WebSphere process starts, the compressed file is extracted to a directory and the icmp.external.directory system property is set. If you attach an HA policy to the web application component, each virtual machine contains a copy of the compressed file, and any updates that are made to the file or directory on one virtual machine are not reflected in the copy of the file on another virtual machine.

By default, the application is available at http://{ip_address}/{context_root}, where:

WAS Information Center


Database components


Data Studio web console

The Data Studio web console component is a database tool included with the IBM Database Patterns. This plug-in component is not available in the Virtual Application Builder unless you accept the license for the IBM Database Patterns. The following are the attributes for this component:

To make a connection between an application component and the database, hover over the blue circle on the Data Studio web console component part on the canvas. When the blue circle turns yellow, draw a connection between the Data Studio web console and the application component.


Database (DB2)

The DB2 database component represents a pattern-deployed database service. The following are the attributes for a DB2 database component:


Connections


Incoming connectable components

Component name Description
Web application (WAS) A web application cloud component represents an execution service for WAR files.
Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).
OSGi application (WAS) OSGi application on WAS.

To make a connection between an application component and the database, hover over the blue circle on the DB2 database component part on the canvas. When the blue circle turns yellow, draw a connection between the DB2 database and the application component.

The application is assumed to use JNDI settings to locate the data source. Specify the JNDI name in the link property panel. During deployment, the JNDI name is set to the corresponding data source, and the name must match the name that is coded into the application.
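
For example, a data access class can resolve the data source as follows. This is a minimal sketch; jdbc/OrderDS is a hypothetical JNDI name that must match the value entered in the link property panel:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderDao {

    public void testConnection() throws Exception {
        // jdbc/OrderDS is a placeholder; it must match the JNDI name
        // configured on the link between the application and the database
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/OrderDS");

        // The connection targets the pattern-deployed DB2 database
        try (Connection conn = ds.getConnection()) {
            System.out.println(conn.getMetaData().getDatabaseProductName());
        }
    }
}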


Existing database (DB2)

An existing DB2 database component represents a connection to a remote DB2 database instance running remotely outside of the cloud infrastructure. The configuration properties allow a connection to be made to the remote DB2 database. The following are the attributes for a remote DB2 database component:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

OSGi application (WAS) OSGi application on WAS.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

To make a connection between an application component and the remote database, hover over the blue circle on the DB2 remote database component part on the canvas. When the blue circle turns yellow, draw a connection between the DB2 remote database and the application component.

The application is assumed to use JNDI settings to locate the data source. Specify the JNDI name in the link property panel. During deployment, the JNDI name is set to the corresponding data source, and the name must match the name that is coded into the application.


Existing database (Informix)

An existing Informix database component represents a connection to a remote Informix database running remotely outside of the cloud infrastructure. The configuration properties allow a connection to be made to the remote Informix database. The following are the attributes for a remote Informix database component:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

OSGi application (WAS) OSGi application on WAS.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

To make a connection between an application component and the remote database, hover over the blue circle on the Informix remote database component part on the canvas. When the blue circle turns yellow, draw a connection between the Informix remote database and the application component.

The application is assumed to use JNDI settings to locate the data source. Specify the JNDI name in the link property panel. During deployment, the JNDI name is set to the corresponding data source, and the name must match the name that is coded into the application.


Existing database (Oracle)

An existing Oracle database component represents a connection to an existing Oracle database instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Oracle database. The following are the attributes for an existing Oracle database component:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web archive (WAR files).

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise archive (EAR files).

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

OSGi application (WAS) OSGi application on WAS.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

To make a connection between an application component and the existing Oracle database, hover over the blue circle on the existing Oracle database component part on the canvas. When the blue circle turns yellow, draw a connection between the existing Oracle database and the application component.

The application is assumed to use JNDI settings to locate the data source. Specify the JNDI name in the link property panel. During deployment, the JNDI name is set to the corresponding data source, and the name must match the name that is coded into the application.


Existing IMS database

An existing Information Management System (IMS) database (IMS DB) component represents a connection to an IMS database instance running remotely outside of the cloud infrastructure. The configuration properties allow a connection to be made to the IMS DB system. The following are the attributes for an IMS database component:

The following are optional properties:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • JNDI name of the data source
  • Resource references of the data source

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI name of the data source
  • Resource references of the data source

To make a connection between an application component and an existing IMS database, hover over the blue circle on the IMS database component part on the canvas. When the blue circle turns yellow, draw a connection between the IMS database and the application component. The application can use JNDI or resource reference settings to locate the data source. Specify the JNDI name or the resource reference in the link property panel. The name must match the name that is coded into the application.


Messaging components


Existing Messaging Service (WebSphere MQ)

An existing messaging service component represents a connection to an external messaging system such as WebSphere MQ. This component allows an enterprise application that runs on WAS to connect to the external messaging resource. The following are attributes for the messaging service:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for WAR files.

  • Java Naming Directory Interface (JNDI) name of JMS connection factory
  • Resource references of JMS connection factory
  • Client ID

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI Name of JMS connection factory
  • Resource references of JMS connection factory
  • Client ID

OSGi application (WAS) OSGi application on WAS

  • JNDI Name of JMS connection factory
  • Resource references of JMS connection factory
  • Client ID

The application is assumed to use JNDI settings to locate the connection factory. Specify the JNDI name in the link property panel, either as a hardcoded JNDI name or by selecting the relevant application resource references from the property panel list box. During deployment, the JNDI name is set to the corresponding connection factory, and mapped, if required, to the relevant resource reference in the application. A warning displays if you do not specify at least one of the connection properties.

To make a connection between an application component and the messaging service, hover over the blue circle on the messaging service component part on the canvas. When the blue circle turns yellow, draw a connection between the messaging service and application component.

The messaging service component represents a connection to an instance of IBM WebSphere MQ. The component can be configured to create a connection to the IBM WebSphere MQ installation. When you click the messaging service component on the Virtual Application Builder canvas, a properties panel displays.
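
As an illustration, an application can look up the connection factory and a destination by the JNDI names configured on the links and then send a message. This is a minimal sketch; jms/OrderCF and jms/OrderQueue are hypothetical JNDI names:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderSender {

    public void send(String payload) throws Exception {
        InitialContext ctx = new InitialContext();
        // Both names are placeholders; they must match the JNDI names
        // configured on the links to the messaging components
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/OrderCF");
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            producer.send(message);
        } finally {
            conn.close();
        }
    }
}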


Topic

A topic represents a message destination on a WebSphere MQ messaging service through which messages are published and subscribed.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

The following is a required attribute for a topic:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • JNDI name
  • Resource environment references
  • Message destination references

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI name
  • Resource environment references
  • Message destination references

OSGi application (WAS) OSGi application on WAS

  • JNDI name
  • Resource environment references
  • Message destination references

The application is assumed to use JNDI settings to locate the topic. Specify the JNDI name in the link property panel, either as a hardcoded JNDI name or by selecting the relevant application resource-references from the property panel list box. During deployment, the JNDI name is set to the corresponding topic, and mapped, if required, to the relevant resource reference in the application. A warning displays if you do not specify at least one of the connection properties.

The required attributes for Link to WebSphere MQ topic are as follows:

To make a connection between a component and the messaging topic, hover over the blue circle on the topic component part on the canvas. When the blue circle turns yellow, draw a connection between the topic and component.
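
When the link is mapped to a resource environment reference rather than a hardcoded JNDI name, the application looks up the topic through the java:comp/env namespace. The following is a minimal sketch; jms/NewsTopic is a hypothetical reference name declared in the application's deployment descriptor:

import javax.jms.Topic;
import javax.naming.InitialContext;

public class TopicLocator {

    public Topic locate() throws Exception {
        // jms/NewsTopic is a placeholder resource environment reference;
        // at deployment it is mapped to the WebSphere MQ topic that is
        // configured on the link
        InitialContext ctx = new InitialContext();
        return (Topic) ctx.lookup("java:comp/env/jms/NewsTopic");
    }
}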


Queue

A queue represents a message queue on a WebSphere MQ messaging service through which messages are sent and received.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

The following is a required attribute for a queue:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • JNDI name
  • Resource environment references
  • Message destination references

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI name
  • Resource environment references
  • Message destination references

OSGi application (WAS) OSGi application on WAS

  • JNDI name
  • Resource environment references
  • Message destination references

The application is assumed to use JNDI settings to locate the queue. Specify the JNDI name in the link property panel, either as a hardcoded JNDI name or by selecting the relevant application resource-references from the property panel list box. During deployment, the JNDI name is set to the corresponding queue, and mapped if required to the relevant resource reference in the application. A warning displays if you do not specify at least one of the connection properties.

The required attributes for Link to WebSphere MQ queue are as follows:

To make a connection between a component and the messaging queue, hover over the blue circle on the queue component part on the canvas. When the blue circle turns yellow, draw a connection between the queue and component.


OSGi components

The OSGi components available as parts for the virtual application pattern are the OSGi application and the existing OSGi bundle repository.


Existing OSGi Bundle Repository (WAS)

This component provides the URL of an existing WAS OSGi bundle repository. The following are the attributes for the existing OSGi bundle repository:


Connections


Incoming connectable components

Component name Description
OSGi application (WAS) OSGi application on WAS

To make a connection between an OSGi application component and the existing OSGi bundle repository, hover over the blue circle on the existing OSGi bundle repository component part on the canvas. When the blue circle turns yellow, draw a connection between the existing OSGi bundle repository and the OSGi application component.


OSGi Application (WAS)

This component represents the OSGi application on WAS.

The following are attributes for the OSGi application component:


Connections


Incoming connectable components

Component name Description
Scaling policy (web or enterprise application) Scaling is a run time capability to automatically scale your application platform as the load changes. A scaling policy component defines this capability and the conditions under which scaling activities are performed for your application.
Routing policy (web, enterprise, or OSGi EBA application) Specifies a routing policy for a web application, enterprise application, or an OSGi EBA application.
Log policy (web or enterprise application) A policy to specify configuration for log record files.
JVM policy (web or enterprise application) A policy to control features of the underlying Java virtual machine (JVM).


Outgoing connectable components

Component name Description Connection
Topic (WebSphere MQ) A topic represents a message destination on an IBM WebSphere MQ messaging service through which messages are published and subscribed.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

  • JNDI name
  • Resource environment references
  • Message destination references

Existing Messaging service (WebSphere MQ) An existing messaging service represents a connection to an external messaging system such as WebSphere MQ.

  • JNDI name of the Java Message Service (JMS) connection factory
  • Resource references of the JMS connection factory
  • Client ID

Existing Database (Oracle) An existing Oracle database component represents a connection to an existing Oracle database instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Oracle database.
Connect Out A component used to open the firewall for outbound TCP connections from a web or enterprise application to a specified host and port.
Connect In A component used to open the firewall for inbound TCP connections from a specified address or range of addresses, to a specified port in the target application component.
Connect In and Connect Out A component used to open ports in the firewall for multiple inbound or outbound TCP connections between servers and a web, enterprise, or Java application.
Database (DB2) A database (DB2) component that represents a pattern-deployed database service.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source
  • Minimum connections
  • Maximum connections
  • Connection timeout

Existing Database (DB2) An existing DB2 database component represents a connection to a remote DB2 database instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the remote DB2 database.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

Existing Database (Informix) An existing Informix database component represents a connection to a remote Informix database instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Informix database.

  • JNDI name of the data source
  • Resource references of the data source
  • Non-transactional data source

Existing CICS Transaction Gateway (TG) An existing CICS TG component represents a connection to a CICS TG instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the CICS TG.

  • JNDI name of the CICS TG resource

Existing User Registry (Tivoli Directory Server) An existing user registry (Tivoli Directory Server) cloud component represents an existing LDAP service that can be deployed by itself or attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security.

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

Existing User Registry (Microsoft Active Directory) An existing user registry (LDAP) cloud component represents an existing LDAP service that can be attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security.
User Registry (Tivoli Directory Server) A user registry (Tivoli Directory Server) cloud component represents a pattern-deployed LDAP service that can be deployed by itself or attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security.
Existing OSGi Bundle Repository The URL of an existing OSGi bundle repository.
Queue (WebSphere MQ) A message queue on a WebSphere MQ messaging service through which messages are sent and received.

If you purchased and enabled the Messaging Extension for Web Application Pattern pattern type, you can connect to either an external WebSphere MQ messaging service or a WebSphere MQ messaging service deployed by using the Messaging Extension for Web Application Pattern.

  • JNDI name
  • Resource environment references
  • Message destination references

You can upload an .eba file to replace an OSGi application in the Virtual Application Console, but you cannot rename the archive as a part of the update.

To make a connection between a component and the OSGi application, hover over the blue circle on the OSGi application component part on the canvas. When the blue circle turns yellow, draw a connection between the OSGi application and the component.


Transaction processing components

There are several transaction processing components to choose from when you build a virtual application pattern.


Existing CICS Transaction Gateway

An existing CICS Transaction Gateway (TG) component represents a connection to an existing CICS TG instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the CICS Transaction Gateway.

You must install a CICS TG resource adapter on IBM PureApplication System W1500 to be able to connect to and use a CICS TG from within your cloud environment.

The following are attributes for the CICS Transaction Gateway component:

You must specify the connection URL that the resource adapter uses to communicate with CICS TG in the form protocol://address (for example, tcp://ctghost.example.com), and specify the port on which CICS TG is listening. The other fields in the properties panel are optional. If you configured SSL on CICS TG, you must also enter the name of the SSL keyring file and the SSL keyring password that you configured. Enter the full path name to the SSL keyring file in the SSL keyring field, for example, /mykeys/jsse/keystore.jks.

For more details on the properties panel settings, view the help by selecting the help icon on the properties panel.


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • JNDI Name of the JCA Connection Factory
  • Maximum number of connections to the CICS Transaction Gateway

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI Name of the JCA Connection Factory
  • Maximum number of connections to the CICS Transaction Gateway

OSGi application (WAS) OSGi application on WAS

  • JNDI Name of the JCA Connection Factory
  • Maximum number of connections to the CICS Transaction Gateway


Existing IMS Transaction Manager

An existing Information Management System Transaction Manager (IMS TM) component enables an enterprise or web application that runs on WAS to connect to and submit transactions to an existing IMS system running remotely outside of the cloud.

The configuration properties allow a connection to be made to the IMS TM system. The following are the required properties:

The following are optional properties:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for WAR files.

  • JNDI Name of the JCA Connection Factory, or
  • Resource references mapping
  • Maximum number of connections to IMS TM
  • Connection timeout

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • JNDI Name of the JCA Connection Factory, or
  • Resource references mapping
  • Maximum number of connections to IMS TM
  • Connection timeout

OSGi application (WAS) OSGi application on WAS

  • JNDI Name of the JCA Connection Factory, or
  • Resource references mapping
  • Maximum number of connections to IMS TM
  • Connection timeout


Install the CICS resource adapter

Before you can use the CICS Transaction Gateway (CICS TG) in the IBM PureApplication System W1500, you must install a CICS TG resource adapter. You can use the ECI adapter, cicseci.rar, or the ECI adapter with two-phase commit support, cicseciXA.rar. PureApplication System does not provide EPI support. The resource adapters are specific to your release of CICS TG and the one you use depends on the platform that you are using and whether you require two-phase or single-phase commit. For more information about CICS TG resource adapters, see the Related information section.

To install a CICS TG resource adapter, log on to PureApplication System as an administrator and upload the resource adapter that matches your CICS TG installation:

  1. Click Cloud > System plug-ins. A configuration dialog box is displayed.
  2. Browse for the resource adapter.
  3. Click OK.

After the resource adapter is installed, you can add the CICS TG component to a virtual application pattern and configure it.

Use the ECI resource adapters


User Registry components

There are several user registry components to choose from when you build a virtual application pattern.


Existing User Registry (IBM Tivoli Directory Server)

An existing user registry cloud component represents an existing LDAP service that can be attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security. The following are attributes for the user registry component:


Default settings

Tivoli Directory Server is registered to the federated repository in WAS by using Virtual Member Manager (VMM) with the following settings:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

OSGi application (WAS) OSGi application on WAS

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

To make a connection between an application component and an existing Tivoli Directory Server user registry, hover over the blue circle on the existing user registry component part on the canvas. When the blue circle turns yellow, draw a connection between the user registry and component.

The current implementation supports a one-time upload of users and groups in an LDAP Data Interchange Format (LDIF) file, and applications are currently limited to enterprise applications. Within the application, the roles are defined in the web.xml file. Bindings of roles to users and groups are defined in the META-INF/ibm-application-bnd.xml file. Bind the roles to groups for ease of management.

The following examples illustrate the three metadata files required to set up an enterprise application with the user registry component.

The LDIF file defines the users and groups for the application. user2 is in the group1 group.

dn: o=acme,c=us
objectclass: organization
objectclass: top
o: ACME
 
dn: cn=user2,o=acme,c=us
objectclass: inetOrgPerson
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: ePerson
cn: user2
userpassword: user2
initials: user2
sn: user2
uid: user2
 
dn: cn=group1,o=acme,c=us
objectclass: groupOfNames
objectclass: top
cn: group1
member: cn=user2,o=acme,c=us 

The web.xml file defines the roles and security policy for the application. Only users in the role1 role can access the protected resources.

<?xml version="1.0" encoding="UTF-8"?>

<web-app id="WebApp_ID" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

    <display-name>HitCountWeb</display-name>
    <servlet>
        <description></description>
        <display-name>HitCountServlet</display-name>
        <servlet-name>HitCountServlet</servlet-name>
        <servlet-class>com.ibm.samples.hitcount.HitCountServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>HitCountServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
    <security-constraint>
        <display-name>AllAuthenticated</display-name>
        <web-resource-collection>
            <web-resource-name>All</web-resource-name>
            <url-pattern>/*</url-pattern>
            <http-method>GET</http-method>
            <http-method>PUT</http-method>
            <http-method>HEAD</http-method>
            <http-method>TRACE</http-method>
            <http-method>POST</http-method>
            <http-method>DELETE</http-method>
            <http-method>OPTIONS</http-method>
        </web-resource-collection>
        <auth-constraint>
            <description>Auto generated Authorization Constraint</description>
            <role-name>role1</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>FORM</auth-method>
        <realm-name></realm-name>
        <form-login-config>
            <form-login-page>/login.jsp</form-login-page>
            <form-error-page>/login.jsp?error=Invalid+username+or+password</form-error-page>
        </form-login-config>
    </login-config>
    <security-role>
        <description>allowed group</description>
        <role-name>role1</role-name>
    </security-role>
</web-app>

The binding file binds the group1 group to the role1 role.

<?xml version="1.0" encoding="UTF-8"?>

<application-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-application-bnd_1_0.xsd"
                 version="1.0">
 
    <security-role name="role1">
        <group name="group1" />
    </security-role>
</application-bnd>


Existing User Registry (Microsoft Active Directory)

An existing user registry cloud component represents an existing LDAP service that can be attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security. The following are attributes for the user registry component:


Default settings

Microsoft Active Directory Server is registered to the federated repository in WAS by using Virtual Member Manager (VMM) with the following settings:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web archive (WAR) files.

  • Role name
  • User role mapping
  • Group role mapping
  • Mapping special subjects

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise archive (EAR) files.

  • Role name
  • User role mapping
  • Group role mapping
  • Mapping special subjects

OSGi application (WAS) OSGi application on WAS

  • Role name
  • User role mapping
  • Group role mapping
  • Mapping special subjects

To make a connection between an application component and an existing Microsoft Active Directory, hover over the blue circle on the user registry component part on the canvas. When the blue circle turns yellow, draw a connection between the user registry and component.

The current implementation supports a one-time upload of users and groups in an LDAP Data Interchange Format (LDIF) file, and applications are currently limited to enterprise applications. Within the application, the roles are defined in the web.xml file. Bindings of roles to users and groups are defined in the META-INF/ibm-application-bnd.xml file. Bind the roles to groups for ease of management.

The following examples illustrate the three metadata files required to set up an enterprise application with the user registry component.

The LDIF file defines the users and groups for the application. user2 is in the group1 group.

dn: o=acme,c=us
objectclass: organization
objectclass: top
o: ACME
 
dn: cn=user2,o=acme,c=us
objectclass: inetOrgPerson
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: ePerson
cn: user2
userpassword: user2
initials: user2
sn: user2
uid: user2
 
dn: cn=group1,o=acme,c=us
objectclass: groupOfNames
objectclass: top
cn: group1
member: cn=user2,o=acme,c=us 

The web.xml file defines the roles and security policy for the application. Only users in the role1 role can access the protected resources.

<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" 
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <display-name>HitCountWeb</display-name>
    <servlet>
        <description></description>
        <display-name>HitCountServlet</display-name>
        <servlet-name>HitCountServlet</servlet-name>
        <servlet-class>com.ibm.samples.hitcount.HitCountServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>HitCountServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
    <security-constraint>
        <display-name>AllAuthenticated</display-name>
        <web-resource-collection>
            <web-resource-name>All</web-resource-name>
            <url-pattern>/*</url-pattern>
            <http-method>GET</http-method>
            <http-method>PUT</http-method>
            <http-method>HEAD</http-method>
            <http-method>TRACE</http-method>
            <http-method>POST</http-method>
            <http-method>DELETE</http-method>
            <http-method>OPTIONS</http-method>
        </web-resource-collection>
        <auth-constraint>
            <description>Auto generated Authorization Constraint</description>
            <role-name>role1</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>FORM</auth-method>
        <realm-name></realm-name>
        <form-login-config>
            <form-login-page>/login.jsp</form-login-page>
            <form-error-page>/login.jsp?error=Invalid+username+or+password</form-error-page>
        </form-login-config>
    </login-config>
    <security-role>
        <description>allowed group</description>
        <role-name>role1</role-name>
    </security-role>
</web-app>

The binding file binds the group1 group to the role1 role.

<?xml version="1.0" encoding="UTF-8"?>
<application-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee
                    http://websphere.ibm.com/xml/ns/javaee/ibm-application-bnd_1_0.xsd"
                    version="1.0">
 
    <security-role name="role1">
        <group name="group1" />
    </security-role>
</application-bnd>


User Registry (Tivoli Directory Server)

A user registry (Tivoli Directory Server) cloud component represents a pattern-deployed LDAP service that can be deployed by itself or attached to a web application component or an enterprise application component. The LDAP service provides a user registry for container-managed security. The following are attributes for the user registry component:


Default settings

Tivoli Directory Server is registered to the federated repository in WAS by using Virtual Member Manager (VMM) with the following settings:


Connections


Incoming connectable components

Component name Description Connection properties
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

OSGi application (WAS) OSGi application on WAS

  • User filter
  • Group filter
  • Role name
  • User role mapping
  • Group role mapping
  • Special subject mapping

To make a connection between a component and the user registry, hover over the blue circle on the user registry component part on the canvas. When the blue circle turns yellow, draw a connection between the user registry and component.

The current implementation supports a one-time upload of users and groups in an LDIF file, and applications are currently limited to enterprise applications. Within the application, the roles are defined in the web.xml file. Bindings of roles to users and groups are defined in the META-INF/ibm-application-bnd.xml file. Bind the roles to groups for ease of management.

The following examples illustrate the three metadata files required to set up an enterprise application with the user registry component.

The LDIF file defines the users and groups for the application. user2 is in the group1 group.

dn: o=acme,c=us
objectclass: organization
objectclass: top
o: ACME
 
dn: cn=user2,o=acme,c=us
objectclass: inetOrgPerson
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: ePerson
cn: user2
userpassword: user2
initials: user2
sn: user2
uid: user2
 
dn: cn=group1,o=acme,c=us
objectclass: groupOfNames
objectclass: top
cn: group1
member: cn=user2,o=acme,c=us 

The web.xml file defines the roles and security policy for the application. Only users in the role1 role can access the protected resources.

<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" 
            version="2.5" 
            xmlns="http://java.sun.com/xml/ns/javaee" 
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
            xsi:schemaLocation="http://java.sun.com
            /xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <display-name>HitCountWeb</display-name>
    <servlet>
        <description></description>
        <display-name>HitCountServlet</display-name>
        <servlet-name>HitCountServlet</servlet-name>
        <servlet-class>com.ibm.samples.hitcount.HitCountServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>HitCountServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
    <security-constraint>
        <display-name>AllAuthenticated</display-name>
        <web-resource-collection>
            <web-resource-name>All</web-resource-name>
            <url-pattern>/*</url-pattern>
            <http-method>GET</http-method>
            <http-method>PUT</http-method>
            <http-method>HEAD</http-method>
            <http-method>TRACE</http-method>
            <http-method>POST</http-method>
            <http-method>DELETE</http-method>
            <http-method>OPTIONS</http-method>
        </web-resource-collection>
        <auth-constraint>
            <description>Auto generated Authorization Constraint</description>
            <role-name>role1</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>FORM</auth-method>
        <realm-name></realm-name>
        <form-login-config>
            <form-login-page>/login.jsp</form-login-page>
            <form-error-page>/login.jsp?error=Invalid+username+or+password</form-error-page>
        </form-login-config>
    </login-config>
    <security-role>
        <description>allowed group</description>
        <role-name>role1</role-name>
    </security-role>
</web-app>

The binding file binds the group1 group to the role1 role.

<?xml version="1.0" encoding="UTF-8"?>
<application-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee
                    http://websphere.ibm.com/xml/ns/javaee/ibm-application-bnd_1_0.xsd"
                    version="1.0">
 
    <security-role name="role1">
        <group name="group1" />
    </security-role>

</application-bnd>

Manage the IBM Directory schema


Other components

There are other components to choose from when you build a virtual application pattern.


Connect Out

This component is used to open the firewall for outbound TCP connections from a web or enterprise application to a specified host and port.

The following are the attributes for the component:


Connections


Examples of connectable components

Component name Description
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).
Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).
OSGi application (WAS) OSGi application on WAS.

To make a connection between an application component and the Connect Out component, hover over the blue circle on the application component. When the blue circle turns yellow, draw a connection from the application component to the Connect Out component.
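
For illustration, the following minimal sketch shows the kind of outbound TCP connection that a Connect Out component permits. The host name backend.example.com and port 9000 are hypothetical values that would have to match the host and port configured on the component.

import java.io.IOException;
import java.net.Socket;

public class ConnectOutExample {
    public static void main(String[] args) throws IOException {
        // Opens an outbound TCP connection through the firewall rule that
        // the Connect Out component created. Host and port are hypothetical.
        try (Socket socket = new Socket("backend.example.com", 9000)) {
            System.out.println("Outbound connection established: " + socket);
        }
    }
}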


Connect In

This component is used to open the firewall for inbound TCP connections from a specified address or range of addresses to a specified port in the target application component.

The following are the attributes for the component:


Connections


Examples of connectable components

Component name Description
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).
Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).
OSGi application (WAS) OSGi application on WAS.

To make a connection between the Connect In component and an application component, hover over the blue circle on the Connect In component. When the blue circle turns yellow, draw a connection from the Connect In component to the application component.
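
For illustration, the following minimal sketch shows an application listener on the port that a Connect In component opens. Port 9001 is a hypothetical value that would have to match the port configured on the component, and the connecting client must fall within the permitted address range.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectInExample {
    public static void main(String[] args) throws IOException {
        // Accepts one inbound TCP connection on the port that the Connect In
        // component opened. The port number is hypothetical.
        try (ServerSocket server = new ServerSocket(9001);
             Socket client = server.accept()) {
            System.out.println("Accepted inbound connection from "
                    + client.getRemoteSocketAddress());
        }
    }
}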


Policies

There are several policies to choose from when you build a virtual application pattern.

You can apply a policy globally at the application level, or apply it to a specific component that supports the policy. When you apply a policy globally, it is applied to all components in the virtual application pattern that support it. If you apply a policy to a specific component and also apply it to the whole virtual application pattern, the configuration of the component-specific policy overrides the application level policy.


Scaling policy

Scaling is a Virtual Application Builder runtime capability to automatically scale your application platform as the load changes. A scaling policy component defines this capability and the conditions for CPU, memory, response time, and database connections, under which scaling activities are performed for your application.

The following are the attributes for a scaling policy:


Connections


Outgoing connectable components

Component name Description
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).
Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).
OSGi application (WAS) OSGi application on WAS.


Routing policy

You can apply a routing policy to the application component parts of the virtual application pattern.

Note: The routing policy is automatically applied to a web application when a proxy shared service is running in the cloud group into which the application is deployed. Otherwise, the routing policy is not automatically added to the virtual application.

The following are the attributes for a routing policy:

When elastic load balancing (ELB) is enabled, the combination of the context root, context root prefix, and virtual host name must be unique to successfully deploy multiple virtual application instances from a virtual application pattern. If you do not manually add a routing policy to a virtual application pattern, the autowiring capability of the elastic load balancing service automatically generates a unique prefix for each deployment.

When you manually add a routing policy to a virtual application pattern, the context prefix is optional. If you do not specify a context root ID, the virtual host name and context root are reserved by the ELB service; if you try to deploy another virtual application instance with the same values, an error message indicates that there is a reservation conflict. Stopping the virtual application instance that originally used these values does not release the reservation. To enable another deployment to use the same virtual host name and context root, you must delete the original virtual application instance, or specify a different context root before you deploy the new instance.

If the context root is defined in the enterprise archive (EAR) file, then the ELB service uses this definition. In this case, the contextRoot cannot be defined as / because it might conflict with other virtual applications. Do one of the following actions:


Connections


Outgoing connectable components

Component name Description
Web application (WAS) A web application cloud component represents an execution service for WAR files.
Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).
OSGi application (WAS) OSGi application on WAS.


Log policy

A log policy can be added to your application component part to specify configurations for log records.

The following are the attributes for a log policy:


Connections


Outgoing connectable components

Component name Description
Web application (WAS) A web application cloud component represents an execution service for Java Platform, Enterprise Edition (Java EE) web applications (WAR files).
Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).
OSGi application (WAS) OSGi application on WAS.


JVM policy

A Java virtual machine (JVM) policy controls the underlying JVM. You can attach the JVM policy to debug WAS processes by using an integrated development environment (IDE) such as IBM Rational Application Developer for WebSphere.

The following are the attributes for a JVM policy:

When you enable debugging, the JVM is started in debug mode and listens on the specified port (typically by way of the standard Java Debug Wire Protocol (JDWP) agent). By default, a debugger on any client machine can attach to the JVM. You can specify a client IP address or IP/netmask to restrict access to the JVM. A client IP address, such as 10.2.3.5, allows a specific client machine to debug. An IP/netmask, such as 10.2.3.5/255.255.0.0, allows any machine on the 10.2 network to attach to the JVM.


Connections


Outgoing connectable components

Component name Description
Web application (WAS) A web application cloud component represents an execution service for WAR files.
Enterprise application (WAS) An enterprise application (WAS) cloud component represents an execution service for Java EE enterprise applications (EAR files).
OSGi application (WAS) OSGi application on WAS.

For more information about using Rational Application Developer for WebSphere, see the Related information section.

You can optionally use the IBM Monitoring and Diagnostic Tools for Java - Health Center (Health Center) to assess the status of a running Java application. Health Center continuous monitoring provides information that helps you to identify and resolve problems with applications.

In PureApplication System, you can configure the IBM Monitoring and Diagnostic Tools for Java - Health Center by using the following attributes in the JVM policy:

For technical information about the IBM Monitoring and Diagnostic Tools for Java - Health Center, see the Related information section.


Interim fix policy

You can apply an interim fix policy to the virtual application to apply updates during deployment.

Upload emergency fixes to the catalog. When you add the interim fix policy to a virtual application, any applicable fixes for the pattern type and plug-ins are displayed in the list in the Interim fixes URL attribute for the policy. Click Select and select the fixes to install during deployment.

Interim fix policies are application-level policies and cannot be attached to individual components, with one exception: currently, WAS interim fixes for deployments that are based on IBM Web Application Pattern can be installed at the component level. Use the Interim fixes URL attribute on the component to install a fix at the component level.

Debugging applications

IBM Monitoring and Diagnostic Tools for Java - Health Center Version 2.1


Manage virtual application pattern types

A virtual application pattern type is a collection of plug-ins that identify components, links, and policies, along with configuration files, which are packaged in a .tgz file. The virtual application patterns are used to build a virtual application that includes these components, links, and policies. You can use the workload console, the command line interface, or the REST API to complete this task. For the command line and REST API information, see the Related information section.

  1. Click Cloud > Pattern Types.

  2. You must accept the license agreement for each pattern type that you want to use. To accept the pattern license agreement:

    1. Select a pattern type.
    2. In the License Agreement field, click Accept.
    3. After you read the license agreement, click Accept.
    4. Click Enable to change the pattern type status to Available.

    For detailed information about accepting the license agreement for specific pattern types, see the Web Application Pattern or IBM Database Patterns documentation.

    By default, you do not have to accept a license for the foundation pattern type, but you must still enable it. You can use the Enable All action in the Pattern Types pane to enable the foundation pattern type when you accept the license for a pattern type.


Import pattern types

You can import a new pattern type into the system.

Restriction: To upload a file that is larger than 2 GB, you must either upload it from a remote system or use the command-line interface. To upload a file from a local system, the size of the file must be smaller than 2 GB.

  1. Click Cloud > Pattern Types.

  2. Click the New icon on the toolbar.

  3. Specify the file details.
    • To upload a local file, click the Local tab. Click Browse to select the .tgz file that contains the pattern type.
    • To upload a remote file, click the Remote tab and specify the URL of the file. If prompted to log on to the remote site to access the file, specify the user name and password.

  4. Click OK.

Now you must accept the license agreement of the pattern type, configure plug-ins in this pattern type, and enable the pattern type to use it.

IBM PureSystems Centre


Delete pattern types

You can delete a pattern type from the system. You might not be able to delete a pattern type in the following cases:

You can use the workload console, the command line interface, or the REST API to complete this task. For the command line and REST API information, see the Related information section.

  1. Click Cloud > Pattern Types.
  2. Select the pattern and click the Delete icon on the toolbar.
  3. Click OK.


Upgrade pattern types

Periodic updates are available for pattern types. The pattern type updates are available on IBM Fix Central. To import an upgraded pattern type, you must be assigned the Workload resources administration role with permission to Manage workload resources (Full permission). You can use the workload console or the REST API to complete this task. For the REST API information, see the Related information section. Pattern types are packaged in a .tgz file, whether the delivery is a release, an update, or a fix pack. For example, a web application pattern has the following format, where x.x.x.x is the release level:

If you download an update from Fix Central and import the file into IBM PureApplication System W1500, the administrator must accept the license agreement and make the version.release (VR) available. If you download a fix pack that includes an updated pattern type and import the file into the system, the new pattern type license is already accepted and the pattern type is automatically available.

The following information shows how various users are impacted by pattern type updates:

After you download the fixes from Fix Central, you can use the command-line interface, the REST API, or the console to import the new pattern type.

  1. Click Cloud > Pattern Types.
  2. Click the New icon on the toolbar.
  3. Click Browse to select the .tgz file to import as a pattern type.

IBM Fix Central


Upgrade deployed pattern types

There are several situations where you might want to apply updates to a pattern type to a deployed virtual application. For example, if you upgraded a pattern type or customized a plug-in that is associated with a pattern type, you might want to apply the changes to a virtual application based on the pattern type. You can use the workload console or the command line interface to complete this task. For the command line information, see the Related information section.

  1. Click Instances > Virtual Applications.

  2. Select the virtual application and click Upgrade on the toolbar.

  3. Click OK.
    The pattern type backs up data based on the configuration in the pattern type plug-ins, and then the upgrade changes are applied. The upgrade can take some time.

    • If the upgrade is successful, the Status field arrow turns green. Review the updated deployment and click Commit to complete the upgrade.
    • If the upgrade fails, the backup data is restored and the virtual application is returned to its previous state.


View pattern types

The system includes a set of pattern types that you can use to create solution-specific virtual applications. You can view the pattern types from the workload console.

By default, there is not a license agreement to accept for the foundation pattern type. However, the foundation pattern type must be enabled before the other pattern types can be enabled. The foundation pattern is a prerequisite to using all other pattern types. You can use the workload console or the REST API to complete this task. For the REST API information, see the Related information section.

  1. Click Cloud > Pattern Types.

  2. Select a pattern type and view the details:

    Description

    A description of the pattern type.

    License agreement

    Indicates if the license agreement is accepted, if there is a license agreement associated with the pattern type.

    Status

    Status of the pattern type: Disabled or Available. To enable the pattern type, select Enable. You can enable the current pattern type directly if no dependencies exist, or first complete all of its prerequisites, such as accepting licenses and enabling required pattern types.

    Required

    Specifies any prerequisite pattern types.

    Plug-ins

    Click show me all plug-ins in this pattern type to view plug-ins associated with the pattern type. Plug-ins required for configuration are also listed.

    Dependency

    Lists pattern type dependencies.


View plug-ins in pattern types

Virtual application pattern types include a set of preinstalled system plug-ins. Use the workload console to view the system plug-ins that are associated with the pattern types. You can use the workload console or the REST API to complete this task. For the REST API information, see the Related information section.

  1. Click Cloud > Pattern Types.
  2. Under the pattern type to view, select the specific pattern type version to view details.
  3. Click show me all plug-ins in this pattern type.


Supported virtual application pattern types

Several virtual application pattern types are included with the product license.

The following table lists pattern types included with the product license. Pattern types that are not included in the product license require a separate license purchase.


Pattern types included in the product license

Pattern type Description
IBM Foundation Pattern A pattern type that provides shared services for deployed virtual applications such as monitoring and load balancing.
IBM Web Application Pattern A pattern type to build and deploy web applications.
IBM Database Patterns A pattern type to build and deploy database instances.
Application Pattern Type for Java A pattern type to build and deploy Java applications.

You can manage pattern types in the workload console. You can view pattern types, view the system plug-ins, import a pattern type, accept license agreements, and remove pattern types.


IBM Database Patterns

IBM Database Patterns provides a standardized database pattern solution that can be reused to deploy and manage database instances in a cloud environment.


Get started with IBM Database Patterns

IBM Database Patterns employs pattern types to create and deploy databases in a Database-as-a-Service (DBaaS) cloud environment. Pattern types are standardized patterns that can be reused to deploy and manage resources.


Overview of IBM Database Patterns

IBM Database Patterns is a product extension used to build online DB2-based databases.

IBM Database Patterns manages DB2 database deployments. IBM PureApplication System W1500 plug-in APIs run within the workload pattern to support models, patterns, and automation. You can select the database requirements to support typical departmental-style applications for a workload pattern. After you deploy a database, PureApplication System determines the underlying topology configuration.

IBM Database Patterns workflow


Database application requirements

Verify that your hardware and software meet the minimum requirements before you use IBM Database Patterns.


Hardware requirements

Review the following hardware requirements for the VMware ESX cloud:


Software requirements

Review the following software requirements:


Accept licenses

You must complete license agreements and perform configuration tasks before you can use IBM Database Patterns. You must be logged in as an administrator to perform these tasks.

  1. Make Foundation Pattern available:

    1. Click Cloud > Pattern Types.
    2. Click Foundation Pattern Type from the menu on the left.
    3. Click Enable in the Status field.

  2. Make Database Patterns available:

    1. Click Cloud > Pattern Types.
    2. Click IBM Database Patterns 1.1.0.5 from the menu on the left.
    3. Click Enable in the Status field.

  3. Accept the license agreements for Transactional Database or Data Mart Patterns:

    1. Click Cloud > Pattern Types.
    2. Click IBM Transactional Database Pattern 1.1.0.5 or IBM Data Mart Pattern 1.1.0.5 from the menu on the left.
    3. Click Accept in the License Agreement field.
    4. Click Enable in the Status field.
    5. If you require both patterns, repeat the process for the option that is not accepted and enabled.

  4. Accept licenses for base images:

    1. Click Catalog > Virtual Images.
    2. For Linux-based systems, choose IBM OS Image for Red Hat Linux Systems.
    3. In the License Agreement field, click Accept.
    4. Click Cloud > Default Deploy Settings to verify that the images are added.


Configure IBM Tivoli Storage Manager

You must configure IBM Tivoli Storage Manager before you can automatically back up a database. You must be logged in as an administrator to perform this operation.

IBM Tivoli Storage Manager is a client-server software solution that is designed to provide centralized, automated data protection, including backup and recovery, archive and retrieval, and restoring data after a disaster. IBM PureApplication System W1500 assumes that a remote Tivoli Storage Manager server exists in the IT infrastructure and sends database backups to the Tivoli Storage Manager server.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Cloud > System Plug-ins.

  3. Select tsm from the list.

  4. Click the Configure icon.

  5. Enter values for the following fields. Use default values where they are supplied.

    TSM Server Address

    Specifies the Tivoli Storage Manager server host name.

    TSM Server TCP/IP Port

    Specifies the TCP/IP port on which the Tivoli Storage Manager server is listening for inbound requests.

    TSM Server Administrator User

    Specifies the Tivoli Storage Manager administrator user ID to be used for Tivoli Storage Manager access.

    TSM Server Administrator Password

    Specifies the password that is associated with the TSM Server Administrator User attribute.

    Domain for DB2

    Specifies the policy domain for DB2 backups.

  6. Click OK.

  7. To disable Tivoli Storage Manager, repeat the preceding steps but do not enter a value for the password. Do not delete the plug-in to remove the configuration.

For changes to the Tivoli Storage Manager server connection information to take effect, the database must be manually reconfigured.

Install TSM

Configure TSM

Define a new policy domain


Reconfigure a database for IBM Tivoli Storage Manager

After the Tivoli Storage Manager server connection information is changed, any existing databases must be manually reconfigured by a DB2 administrator. To complete this task, you must log in to the target DB2 virtual machine as an administrator, through SSH. The Tivoli Storage Manager configuration is used by all subsequent deployments and must be accessible from all cloud groups. If the configured TSM server is not accessible, DB2 deployments fail because of baseline backup failure.

  1. Update the Tivoli Storage Manager server address and the Tivoli Storage Manager server port in the following file:

    /opt/tivoli/tsm/client/api/bin64/dsm.sys

  2. If the Tivoli Storage Manager server port is not TCP 1500, open the firewall port by running the following command:

      /0config/nodepkgs/common/scripts/firewall.sh open tcpout -dport port

  3. Connect to the Tivoli Storage Manager server from the Tivoli Storage Manager administrative command line, by using administrator access. Run the following command:

      register node node_name node_password domain=domain_name archdelete=no backdelete=no

    Where:

    node_name
    The value that is shown in /opt/tivoli/tsm/client/api/bin64/dsm.sys
    node_password
    The Tivoli Storage Manager password for the new node.
    domain_name
    Select a domain that is set up correctly on the server side of Tivoli Storage Manager by using the define domain db2domain command.

  4. Restart the DB2 instance instance_name.

  5. Update the database configuration with the Tivoli Storage Manager server changes by running the following command:

      db2 update db cfg for database_name using TSM_PASSWORD node_password

Install TSM

Configure TSM

Define a new policy domain


Work with database instances

You can create and manage databases using IBM Database Patterns.

A database is a DB2 database deployed directly or from a predefined database pattern. A new database is provisioned by specifying provisioning options in a single step or by selecting and deploying from a database pattern. To access existing databases, click Instances > Databases.


Create databases in one step

You can create databases directly through a single procedure by using IBM Database Patterns.

You must accept licenses and enable patterns before you can create a database.

If you have PureData System for Transactions integrated with PureApplication System and you want to create a DB2 pureScale database, you must have a DB2 pureScale instance deployed on your PureData System for Transactions system. When you create a database with a Source of Clone from a database image, deploy onto the same Platform and OS that was used in the source database image.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the new database page by using one of the following steps:
    • From the Welcome page in Working with databases click Deploy Database.
    • Click Instances > Databases. Click the New icon.

  3. In the dialog box, enter a value for each of the following fields:

    Database Name

    The name must be no more than eight characters, begin with a letter, and can contain an underscore (_) but no other special characters.

    Database Description

    Specify identifying details.

    Availability / Scalability

    This field is displayed if you have PureData System for Transactions integrated with PureApplication System.

    • Select Standard to deploy your database on the PureApplication System.
    • Select High to deploy your database on the PureData System for Transactions.

      You must have a DB2 pureScale instance deployed on the PureData System for Transactions before you can provision a DB2 pureScale database.

    Purpose

    This field is displayed if you are using a PureApplication System with no integration of PureData System for Transactions or you are using a PureData System for Transactions integrated with PureApplication System and selected Standard in the Availability / Scalability field.

    Select Production to indicate that the database is deployed in a live business environment.

    Source

    Apply a default database workload standard

    This source option applies predefined database configurations to create a database.

    Apply database workload standards

    This source option uses a set of user-defined database configurations to create a database instance.

    Clone from a database image

    This source option uses an existing database as a model for creating a database instance.

    Follow the instructions on the linked page for the option you select and return to this page to complete this task.

    DB2 pureScale instance

    Select the instance where the database is deployed if you have PureData System for Transactions integrated with PureApplication System and selected High in the Availability / Scalability field.

    Complete the following fields only if you are using a PureApplication System with no integration of PureData System for Transactions or you are using a PureData System for Transactions integrated with PureApplication System and selected Standard in the Availability / Scalability field.

    IPv4 or IPv6

    Select one of these options based on your expected traffic.

    Profile and Cloud Group

    Enter values for the fields, if any, from the drop-down menus based on your user access rights and assignments.

    Advanced

    • Expand this heading.
    • Enter an SSH key, or click the Generate icon to generate an SSH key automatically.
    • Click Download and save the key to your local system.

    An SSH key is required to access the virtual machine for certain administrative tasks even if the system loses connectivity or encounters problems.

  4. Click OK.

  5. The database that you are deploying appears in the list of deployed databases. Click the Refresh icon to update the status of this operation in the list. Click your database name and expand the History heading to see the steps of deployment.

A database instance is available when the status in the list on the left is "Deployed" and the Running icon is displayed.

Note: Deploying a database with these steps creates a database pattern with the same title in Patterns > Database Patterns. This allows you to create a similar database later if required.

Loading data into a database


Apply a default database workload standard

You can create database patterns or deploy databases by applying a default database workload standard by using IBM Database Patterns. Choose Apply a default database workload standard from the Source list when you create a database.

Workload standards are a set of predefined database configurations used as a provisioning approach for creating database patterns. When a workload standard is selected, a set of scripts runs to tune the operating system and instance configuration, create the database and accompanying objects and sometimes also load the initial data.

  1. Choose a workload standard from the list. The default workload standards available vary with your system configuration. Choose one of the following workload standards:

    Departmental Transactional

    Use the departmental transactional default workload standard primarily for online transaction processing. The departmental transactional workload standard is optimized for transactional applications. Transactional databases are designed for speed, simplicity, and volume.

    If you have PureData System for Transactions integrated with PureApplication System, this workload standard is available when you select Standard from the Availability / Scalability list. The Availability / Scalability list is available only when you have PureData System for Transactions integrated with PureApplication System.

    Data Mart

    Use the data mart default workload standard primarily for data warehousing. The data mart workload standard is optimized for analytics and reporting applications. Data mart databases are designed for flexibility and ease of access.

    If you have PureData System for Transactions integrated with PureApplication System, this workload standard is available when you select Standard from the Availability / Scalability list. The Availability / Scalability list is available only when you have PureData System for Transactions integrated with PureApplication System.

    OLTP

    Use the OLTP default workload standard primarily for highly available online transaction processing. The OLTP workload standard is optimized for transactional applications. OLTP databases are designed for speed, simplicity, and volume.

    This workload standard is available only if you have PureData System for Transactions integrated with PureApplication System and selected High from the Availability / Scalability list. The Availability / Scalability list is available only when you have PureData System for Transactions integrated with PureApplication System.

    You must have a DB2 pureScale instance deployed on the PureData System for Transactions before you create the database.

    Default user

    Enter the name of the user that is created with the database if you selected OLTP.

    This field does not apply to PureApplication System with no integration of PureData System for Transactions.

    Password

    Enter the login details for this user if you selected OLTP. You can specify either a user that exists on the PureData System for Transactions or a new user. If you specify an existing user, the specified password must be the same as that user's password on the PureData System for Transactions. If you specify a new user, that new user is created on the PureData System for Transactions and assigned the password that you specify.

    This field does not apply to PureApplication System with no integration of PureData System for Transactions.

    Database size (GB)

    Specify the size limit for user data in your database. The database size limit is 500 GB unless you have PureData System for Transactions integrated with PureApplication System and selected High from the Availability / Scalability list.

    Database compatibility mode

    DB2 mode is the default option. Choose Oracle mode to allow applications currently running on Oracle to run on DB2.

    Database version

    Specify the operating system to use on the database.

    Database level

    Specify the latest software update to use on the database.

  2. Optional: To select a schema file, click Browse, navigate to a schema file, and click Open in the File Upload window. A schema file is an SQL file (.ddl or .sql extension) that determines the structure of a database. Schema file statements must be delimited by line breaks, semicolons, or standard DB2 delimiters.

  3. Optional: Expand Advanced options and complete the following steps:

    1. Select an option from Territory. This value specifies the region that is associated with your database. This value determines time and date formats, and also determines the range of options for Code set.
    2. Select an option from Code set. This value specifies the set of characters that are permitted for storage in the database. The set of possible code set values depends on the value you choose for Territory.
    3. Select an option from Collating sequence. This value specifies how to sort non-Unicode databases. The set of possible collating sequences depends on the option you choose for Territory.

Choosing the code page, territory, and collation for your database


Apply database workload standards

You can create database patterns or deploy databases by applying a customized database workload standard using IBM Database Patterns. Choose Apply a database workload standard from the Source list when you create a database.

Workload standards are a set of predefined database configurations used as a provisioning approach for creating database patterns. You may also choose a customized workload standard that has user-defined tunings. When a workload standard is selected, a set of scripts runs to tune the operating system and instance configuration, create the database and accompanying objects and sometimes also load the initial data.

  1. Choose a workload standard with user-defined tunings from the list.

  2. Enter or specify a value for the following fields:

    Default user

    Enter the name of the user that is created with the database.

    This field does not apply to PureApplication System with no integration of PureData System for Transactions.

    Password

    Enter the login details for this user.

    This field does not apply to PureApplication System with no integration of PureData System for Transactions.

    Database size (GB)

    Specify the size limit for user data in your database. The database size limit is 500 GB unless you have PureData System for Transactions integrated with PureApplication System and selected High from the Availability / Scalability list.

    Database version

    Specify the operating system to use on the database.

    Database level

    Specify the latest software update to use on the database.


Clone from a database image

You can create database patterns or deploy databases by cloning from a saved database image using IBM Database Patterns.

Choose Clone from database image from the Source list when you create a database.

To use this function, IBM Tivoli Storage Manager must be configured before the instance that hosts the database is created. There must be at least one pre-existing database image.

Clone is a provisioning approach that uses an existing database image as a model for creating database patterns. When an image is selected, the metadata stored during backup is retrieved. A new virtual machine is created with the same resource settings. The DB2 Restore command creates a new database with the same license and configurations. This cloned database then resides on the newly created virtual machine.

It is recommended that you use manually created images in preference to automatically created backups for this task. You can manually create a database image in the Database Service Console. Images created this way are labeled as "Manual" under Image Type in the list of images.

Choose a pre-existing database image from the list. There must be at least one previously created database image to use this option. Choose an image with the operating system appropriate for your database. All attributes come from the database image; when you clone from a saved image, you are not required to select a schema or any other options.


View databases

You can view deployed databases by using IBM Database Patterns.

Unless you have administrator privileges, you can view the properties of only the databases that you created.

If you have administrator privileges, you can view the properties of any database.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following methods:

    • From the Welcome page in Working with databases, click View Database.
    • Click Instances > Databases.

  3. From the Databases Instances page, choose a database from the list on the left side. For the database that you select, the following information is displayed:

    Database ID

    Unique ID assigned to each database in the remote database server.

    Created by

    User login name that created the database.

    Database description

    The user-specified set of details about the database. This information is not interpreted by IBM Database Patterns but displayed along with the database as additional information.

    Availability / Scalability

    This field is displayed only if you have PureData System for Transactions integrated with PureApplication System.

    The following fields are displayed under one of the following conditions:

    • You have a PureApplication System with no integration of PureData System for Transactions.
    • You have PureData System for Transactions integrated with PureApplication System, and the Availability / Scalability value is Standard.

    Host

    IP address at which the remote database server is located.

    Port

    Specifies the TCP/IP port on which the remote database is listening for inbound requests.

    In cloud group

    Indicates the collection of hypervisors that are associated with the database. Click the name of the cloud group to access the cloud group management page.

    Application user

    A default user that you can use for application access to the database and to run most DB2 operations. This field is visible only to the owner of the database.

    Application DBA

    A default user that you can use to manage and tune databases and manage privileges at the database level. This field is visible only to the owner of the database.

    JDBC URL

    The location that JDBC applications use to access the database for the indicated user. Toggle the Show and Hide buttons to display or conceal the location. This field is visible only to the owner of the database. A sketch of a client connection that uses this URL follows the field list.

    Password

    The password for the indicated user. Toggle the Show and Hide buttons to display or conceal the password. This field is visible only to the owner of the database.

    Database Level

    The latest software update currently running on the database.

    Status

    The current state of the database. The Log link accesses the viewer where you can view activity logs.

    History

    A list of status changes with a timestamp for each status change.

    The following fields are displayed when you have PureData System for Transactions integrated with PureApplication System, and the Availability / Scalability value is High:

    DB2 pureScale instance

    The name of the DB2 pureScale instance that hosts the database.

    Host

    IP address at which the remote database server is located.

    Port

    Specifies the TCP/IP port on which the remote database is listening for inbound requests.

    Database Level

    The latest software update currently running on the database.

    Status

    The current state of the database. The Log link accesses the viewer where you can view activity logs.

    Database Storage

    The storage capacity that is allocated to table spaces, logs, and mirrored logs.
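
The following is a minimal sketch of a JDBC client that uses the values shown on this page. The URL, user, and password are hypothetical; substitute the JDBC URL, Application user, and Password field values for your database, and ensure that a DB2 JDBC driver (for example, db2jcc4.jar) is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class Db2ConnectExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; use the JDBC URL and credentials from the
        // database instance page instead.
        String url = "jdbc:db2://192.0.2.10:50000/MYDB";
        try (Connection conn =
                DriverManager.getConnection(url, "appuser", "secret")) {
            System.out.println("Connected to "
                    + conn.getMetaData().getDatabaseProductName());
        }
    }
}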


Stop and start databases

Stop and start database instances using IBM Database Patterns.

Note: Administrators can stop and start all databases. Other users can stop only the databases that each user has created.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the database pattern page by using one of the following steps:
    • From the Welcome page in Working with databases click View Database.
    • Click Instances > Databases.

  3. On the left side of the Database Instances page, click the name of the database you want to stop or start.

  4. To stop a database with a status of "running", click the Stop icon and click Confirm in the dialog box.

  5. To restart a database with a status of "stopped", click the Start icon and click Confirm in the dialog box.

Change-of-state actions are recorded in the log files.


Upgrade databases

You can upgrade a database to use the latest version of IBM Database Patterns.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the database pattern page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. Click the name of the database you want to upgrade.

  4. Click the Upgrade icon, then click Yes to confirm this operation. The system is stopped while this operation is executed.

  5. Click OK in the dialog box.

  6. When the operation is complete, the Upgrade icon is unavailable.


Delete databases

You can delete databases created with IBM Database Patterns.

Unless you have administrator privileges, you can delete only the databases that you created. Administrators can delete any database.

This function destroys all the data in the database, drops the database instance, and removes the database name from the list of databases.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the Database Instances page, click the name of the database you want to delete.

  4. Click the Delete icon. You can perform this operation on a database in any status.

  5. Click Confirm in the dialog box. The Status of the database changes to Stopping then Terminating.

  6. When the operation is complete, confirm that the database no longer appears in the list.


Administer databases

You can use IBM Database Patterns to perform certain administrative operations on a deployed database.


Add public keys

You can add or update an SSH public key by using IBM Database Patterns, which gives you access to perform maintenance tasks on databases.

You must have generated a public/private key pair before you perform this operation.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Depending on how the instance was created, choose the instance to access in one of the following ways:

    • If you created your database by using a Database Pattern, then from the Welcome page, under Working with databases, click View Database. Alternatively, click Instances > Databases.
    • If you created your database by using a Virtual Application, then from the Welcome page, under Working with virtual applications, click View Virtual Application Instances. Alternatively, click Instances > Virtual Applications.

  3. From the list on the Database Instances page, click a name to see details of that database.

  4. Click the Manage icon.

  5. On the Operations tab, choose SSH from the pane.

  6. Expand Add or update SSH public key.

  7. Add your public key in the text box.

  8. Click Submit.

  9. Click the Refresh icon to update the status of this operation in the list.

You can now use the key pair to access the DB2 virtual machine and perform administrative tasks.
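
For reference, here is a minimal shell sequence for generating a key pair locally and then logging in with the private key after the public key is submitted. This is a sketch: the key file name is arbitrary, virtuser is the login user on the DB2 virtual machine, and the IP address is a placeholder for the value shown in the Host field of the database details.

ssh-keygen -t rsa -f ~/.ssh/db2vm_key            # writes db2vm_key (private) and db2vm_key.pub (public)
cat ~/.ssh/db2vm_key.pub                         # paste this value into the text box in step 7
ssh -i ~/.ssh/db2vm_key virtuser@172.16.37.180   # log in to the DB2 virtual machine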


Change DB2 administrator password

You must change the password of the DB2 administrator (db2inst1) user as a prerequisite to using certain IBM tools. You must add a public key for Secure Shell (SSH) access to the DB2 virtual machine before you perform this operation.

  1. Log on to the DB2 virtual machine by using the virtual user ID, virtuser.

  2. On ESX systems:

    1. Execute the following command: sudo su -
    2. Enter a new password for db2inst1.
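
A minimal sketch of the full sequence, assuming the standard Linux passwd utility is used to set the new password (the key file name and host are placeholders):

ssh -i ~/.ssh/db2vm_key virtuser@<host>   # requires the SSH public key added earlier
sudo su -                                 # switch to root
passwd db2inst1                           # prompts for the new db2inst1 password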

You can now use the password to access the DB2 virtual machine as the DB2 administrator.


Upload a DB2 fix pack

You can upload a DB2 fix pack using IBM Database Patterns for use in future updates.

You must be logged in as an administrator to perform this task.

For DB2 9.7, only fix packs 6 and later are supported.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  2. Choose Catalog > DB2 Fix Packs.
  3. Click the New icon.
  4. Add details to the DB2 fix pack name and Description fields in the pane and click Save. The fix pack file name must have a .tar.gz extension.
  5. Click the name of your new fix pack in the panel on the left.
  6. In the DB2 fix pack file field, click Browse, navigate to the location of the fix pack and click Open.


Delete a DB2 fix pack

You can delete a DB2 fix pack from the updates stored in IBM Database Patterns.

You must be logged in as an administrator to perform this task.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  2. Choose Catalog > DB2 Fix Packs.
  3. From the list on the DB2 Fix Packs page, choose the fix pack you want to delete.
  4. Click the Delete icon.
  5. Click Confirm in the dialog box.


Back up a workload database without using Tivoli Storage Manager

Workload database backups can be run with any of the following methods:

  • RESTORE DATABASE command
  • Manage data growth
  • Build a recovery strategy for an IBM Smart Analytics System data warehouse
  • Use the IBM Workload Plug-in Development Kit


Use the Database Service Console

The Database Service Console is the administrative section of IBM Database Patterns.

The Database Service Console appears in a separate window within the application interface. To open it, click the Manage icon while viewing the details of a deployed database.

You perform several administrative tasks through the console, as described in the following topics.


Back up databases manually

You can create a full online backup of a deployed database to the IBM Tivoli Storage Manager (TSM) server using IBM Database Patterns.

To use Tivoli Storage Manager for creating backup images of a database, you must configure Tivoli Storage Manager prior to deploying that database. To create backup images of databases deployed before Tivoli Storage Manager was configured, contact the Operations group.

When Tivoli Storage Manager is configured, the backup scheduler automatically performs a daily database backup. However, you have the option of supplementing the automated database backup feature by performing a manual backup operation. Manually created backup images are never overwritten.

The time stamp is a 14-character representation of the date and time when you performed the backup operation, in the format yyyymmddhhnnss, where yyyy represents the year, mm the month (01 to 12), dd the day of the month (01 to 31), hh the hour (00 to 23), nn the minutes (00 to 59), and ss the seconds (00 to 59).
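
The format corresponds to the POSIX date format string %Y%m%d%H%M%S. For example, the following command prints the current time in the backup time stamp format, such as 20120514023034 for 2012-05-14 02:30:34:

date +%Y%m%d%H%M%S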

The backup image does not include customized OS users and groups, DB2 registry values, or any other customized operating system data that is not part of a workload standard.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. In the Databases Instances pane, click the name of the database instance you want to back up.

  4. Click the Manage icon.

  5. On the Operations tab, choose the DB2 option from the list.

  6. Under Backup image management, expand Create a database image. If you have not yet configured Tivoli Storage Manager, this option is not displayed.

  7. Specify a unique name for Image Name.

  8. Optional: Specify identifying details in Image Description.

  9. Click Submit.

  10. Click the Refresh icon above the list of backups at the bottom of the page to update the status of this operation as it processes. Completed backups display "Success" in the Result field. Failed backups display an error code.

  11. Expand List all database images to view current backup images.
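
For orientation, the console operation corresponds to a DB2 online backup to Tivoli Storage Manager. A roughly equivalent command from the DB2 command line is sketched below; MYDB is a placeholder database name, and the console operation remains the supported path:

db2 backup database MYDB online use tsm include logs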

RESTORE DATABASE command

Manage data growth

Build a recovery strategy for an IBM Smart Analytics System data warehouse

Restore your data


Back up databases automatically

You can schedule an automatic full online backup of a deployed database to the IBM Tivoli Storage Manager (TSM) server using IBM Database Patterns.

To use Tivoli Storage Manager for creating backup images of a database, you must configure Tivoli Storage Manager prior to deploying that database. To create backup images of databases deployed before Tivoli Storage Manager was configured, contact IBM Support.

When triggered, the scheduler automatically invokes an online backup with transaction logs enabled. It also records database and instance parameters, timestamps, Tivoli Storage Manager configuration details, backup image information and resource settings in a metadata file.

Images created this way are flagged as "Auto" under Image Type in the list of images. A maximum of seven backup images can be stored; after that, the oldest image is deleted automatically when a new image is created.

Image creation occurs at a predetermined time during an off-peak window.

Note: Scheduled backups automatically create a database image on a one-time, daily, or weekly basis, or are disabled, as configured. The default setting is Daily.

The time stamp is a 14-character representation of the date and time when the backup operation was executed, in the format yyyymmddhhnnss, where yyyy represents the year, mm the month (01 to 12), dd the day of the month (01 to 31), hh the hour (00 to 23), nn the minutes (00 to 59), and ss the seconds (00 to 59).

The backup image does not include customized OS users and groups, DB2 registry values, or any other customized operating system data that is not part of a workload standard.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the Databases Instances page, click the name of the database instance you want to back up.

  4. Click the Manage icon.

  5. On the Operations page, choose DB2 from the list.

  6. Under Backup image management, expand Automatic scheduled database backup.

  7. Choose an option under Frequency. To disable scheduled database backups, choose Off.

  8. Click Submit.

  9. Expand List all database images to view current backup images.

RESTORE DATABASE command

Manage data growth

Build a recovery strategy for an IBM Smart Analytics System data warehouse

Restore your data


Apply a DB2 fix pack

You can apply a DB2 fix pack using IBM Database Patterns to update your database system to the latest version.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the database pattern page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. In the pane on the left side of the Database Instances page, click the name of the database instance you want to update.

  4. Stop the database instance.

  5. Click the Manage icon.

  6. On the Operations page, choose DB2 from the pane on the left side.

  7. Under Fundamental, expand Apply DB2 fix pack.

  8. Choose the fix pack you want to apply.

  9. Click Submit.

  10. Click the Refresh icon above the list at the bottom of the page to update the status of this operation as it processes. Completed fix pack updates display "Success" in the Result field. Failed updates display an error code.


Change application-level user passwords

You can change passwords or allow SSH access for designated DB2 users using IBM Database Patterns.

Application User is a default user that you can use for application access to the database; it can run most DB2 operations. Application DBA is a default user that you can use to manage and tune databases and to manage privileges at the database level. Both users are provided by default as part of database provisioning. You might need to enable SSH access for these users to use certain IBM tools.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the left side of the Database Instances page, click a name to see details of that database.

  4. Click the Manage icon.

  5. On the Operations page, choose DB2 from the pane on the left side.

  6. Under Backup image management, expand Update configuration.

  7. Specify new passwords for one or both of the users displayed.

  8. To permit SSH access for a user, choose Allow in the SSH access section.

  9. Click Submit.

  10. Click the Refresh icon to update the status of this operation in the list.


Create users

You can create users on the virtual machine with IBM Database Patterns.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the left side of the Database Instances pane, click a name to see details of that database.

  4. Click the Manage icon.

  5. On the Operations page, choose DB2 from the pane on the left side.

  6. Under Security, expand Create a new user on the virtual machine.

  7. Specify a unique name for User name.

  8. Specify and confirm a new password for Password.

  9. Optional: Click Select and choose each system-level authority that is required for the user. For more information about these authorities, see Authorization, privileges, and object ownership.

  10. To permit SSH access for a user, choose Allow.

  11. Click Submit.

  12. Click the Refresh icon to update the status of this operation in the list.


Reset user passwords

You can reset passwords for users on the virtual machine with IBM Database Patterns.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the database pattern page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the left side of the Database Instances page, click a name to see details of that database.

  4. Click the Manage icon.

  5. On the Operations page, choose DB2 from the pane on the left side.

  6. Under Security, expand Reset password.

  7. Select a user from the drop-down list.

  8. Specify and confirm a new password for the user.

  9. Click Submit.

  10. Click the Refresh icon to update the status of this operation in the list.


Delete users

You can delete users on the virtual machine with IBM Database Patterns.

Note: You cannot delete the database owner.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the left side of the Database Instances page, click a name to see details of that database.

  4. Click the Manage icon.

  5. On the Operations page, choose DB2 from the pane on the left side.

  6. Under Security, expand List all users on the virtual machine.

  7. Select a user from the list.

  8. Click Delete.

  9. Click the Refresh icon to update the status of this operation in the list.


Create database user groups

You can create user groups on your database with IBM Database Patterns.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the left side of the Database Instances page, click a name to see details of that database.

  4. Click the Manage icon.

  5. On the Database Console page, select Operations > User Groups.

  6. Click the Add icon, specify a name in the dialog box, and click OK.

  7. Select a permission level for the user group:

    • SYSADM grants system administration permissions.
    • SYSCTRL grants system control permissions.
    • SYSMAINT grants system maintenance permissions.
    • SYSMON grants system monitoring permissions.
    • OTHER grants user-defined permissions.

  8. To change the user group permissions, repeat the previous step.
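
These levels correspond to the DB2 system authority levels. For reference, at the instance level such an authority is conferred by assigning an operating system group to the matching database manager configuration parameter. The following is a sketch of the underlying DB2 commands, not necessarily what the console executes; ctrlgrp is a placeholder group name:

db2 update dbm cfg using SYSCTRL_GROUP ctrlgrp   # members of OS group ctrlgrp receive SYSCTRL
db2stop
db2start                                         # restart the instance so that the change takes effect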


Delete database user groups

You can delete user groups on your database with IBM Database Patterns. Default user groups cannot be deleted.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the left side of the Database Instances page, click a name to see details of that database.

  4. Click the Manage icon.

  5. On the Database Console page, select Operations > User Groups.

  6. Select a user group from the list.

  7. Click Delete, then in the dialog box click Confirm.

  8. Click the Refresh icon to update the status of this operation in the list.


View database log files

You can view the log files of a deployed database using IBM Database Patterns.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the databases page by using one of the following steps:
    • From the Welcome page, under Working with databases, click View Database.
    • Click Instances > Databases.

  3. On the Database Instances page, click a name to see details of that database.

  4. Click the Manage icon or click the Log link in the Status field.

  5. On the Logging page, expand the name of your host in the panel on the left.

  6. Use the tree to navigate to the required log file and click the name to view the details.


Work with database patterns

A database pattern is a predefined set of configurations used like a template to simplify and standardize the creation of databases.

You can create and manage database patterns by using IBM Database Patterns. A database pattern can be selected before you deploy a new database instance. To access database patterns, click Patterns > Database Patterns.


Create database patterns

You can create database patterns used to provision databases by using IBM Database Patterns.

You must accept the licenses and enable the pattern type before you can create a database pattern.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Patterns > Database Patterns.

  3. On the Database Patterns page, click the New icon.

  4. In the Database Pattern box, enter a value for each of the following fields:

    Database Pattern Name

    Specify the user-defined identifier for the pattern.

    Database Pattern Description

    Specify identifying details.

    Availability / Scalability

    This field is displayed if you have PureData System for Transactions integrated with PureApplication System.

    • Select Standard if you want your pattern to provision databases on the PureApplication System.
    • Select High if you want your pattern to provision DB2 pureScale databases on the PureData System for Transactions.

      You must have a DB2 pureScale instance deployed on the PureData System for Transactions before you can provision a DB2 pureScale database.

    Purpose

    This field is displayed under one of the following conditions:

    • You are using a PureApplication System with no integration of PureData System for Transactions.
    • You are using a PureData System for Transactions integrated with PureApplication System, and you selected Standard in the Availability / Scalability field.

    Select Production to indicate that the database is deployed in a live business environment.

    Source

    Applying a default database workload standard

    This source option applies predefined database configurations to create a database.

    Applying database workload standards

    This source option uses a set of user-defined database configurations to create a database instance.

    Clone from a database image

    This source option uses an existing database as a model to create a database instance.

    Follow the instructions on the linked page for the option you select and return to this page to complete this task.

  5. Click Save.

  6. Click the Refresh icon to update the status of this operation in the list.

  7. To grant additional users access to the pattern, choose the user name from the list in the Access granted to pane.

    • Toggle read and write access for this user by clicking the Read and Write links.
    • Click Remove to delete access for this user.

Loading data into a database


Deploy databases from database patterns

You can deploy databases by using database patterns with IBM Database Patterns.

If you have PureData System for Transactions integrated with PureApplication System and you want to deploy a high availability database pattern, you must have a DB2 pureScale instance deployed on your PureData System for Transactions system.

When you deploy a database from a database pattern with a Source of Clone from a database image, deploy onto the same Platform and OS that were used in the source database image.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the database pattern page by using one of the following steps:

    • On the Welcome page, under Working with databases, click Create Database Pattern.
    • Click Patterns > Database Patterns.

  3. On the Database Patterns page, choose a pattern from the list.
  4. Click the Deploy icon.

  5. Depending on the database pattern you chose, perform the following steps in the dialog box:

    • If you are deploying a standard availability database pattern, complete the dialog box as follows:

      1. Specify a name for Database name. The name must be no more than eight characters, must begin with a letter, and can contain an underscore (_) but no other special characters.
      2. Optional: Specify identifying details in Database description.
      3. Choose IPv4 or IPv6 based on your expected traffic.
      4. Choose Profile, Cloud Group, and other options, if any, from the drop-down menus based on your user access rights and assignments.
      5. Optional: Expand Advanced and enter an SSH key or click the Generate icon to generate an SSH key automatically.

        An SSH key is required to access the virtual machine for certain administrative tasks. After generating an SSH key, click Download and save the key to your local system to facilitate access to the virtual machine even if the system loses connectivity or encounters problems.

      6. Click OK.

    • If you are deploying a high availability database pattern, complete the dialog as follows. High availability database patterns are available only when you have PureData System for Transactions integrated with PureApplication System.

      1. Specify a name for Database name. The name must be no more than eight characters, must begin with a letter, and can contain an underscore (_) but no other special characters.
      2. Optional: Specify identifying details in Database description.
      3. Enter a value for the Default user field.

        You can specify either a user that exists on the PureData System for Transactions or a new user. A new user is created on the PureData System for Transactions.

      4. Enter a value for the Password field.

        If you specify an existing user, the password must match that user's password on the PureData System for Transactions. If you specify a new user, the new user is created on the PureData System for Transactions and assigned the password that you specify.

      5. Enter a value for the DB2 pureScale instance field. Specify the instance that hosts the database when it is deployed.

        You must have deployed a DB2 pureScale instance on the PureData System for Transactions before you can deploy a high availability database pattern.

      6. Click OK.

  6. Choose Instances > Databases to see the database you are deploying in the list of databases. Click the Refresh icon to update the status of this operation in the list. Click your database name and expand the History heading to see the steps of deployment.

A database instance is available when the status in the list of databases is "Deployed" and the Running icon is displayed.

Loading data into a database


Modify existing database patterns

You can edit the database patterns used to provision databases by using IBM Database Patterns.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Patterns > Database Patterns.

  3. From the list, click the pattern name.

  4. On the Database Patterns page, click the Open icon.

  5. In the Database Pattern box, edit the values of the following fields:

    Database Pattern Name

    Specify the user-defined identifier for the pattern.

    Database Pattern Description

    Specify identifying details.

    Availability / Scalability

    This field is displayed if you have PureData System for Transactions integrated with PureApplication System.

    • Select Standard if you want your pattern to provision databases on the PureApplication System.
    • Select High if you want your pattern to provision DB2 pureScale databases on the PureData System for Transactions.

      You must have a DB2 pureScale instance deployed on the PureData System for Transactions before you can provision a DB2 pureScale database.

    Purpose

    This field is displayed under one of the following conditions:

    • You are using a PureApplication System with no integration of PureData System for Transactions.
    • You are using a PureData System for Transactions integrated with PureApplication System, and you selected Standard in the Availability / Scalability field.

    Select Production to indicate that the database is deployed in a live business environment.

    Source

    Applying a default database workload standard

    This source option applies predefined database configurations to create a database.

    Applying database workload standards

    This source option uses a set of user-defined database configurations to create a database instance.

    Clone from a database image

    This source option uses an existing database as a model to create a database instance.

    Follow the instructions on the linked page for the option you select and return to this page to complete this task.

  6. Click Save.

  7. Click the Refresh icon to update the status of this operation in the list.

  8. To grant additional users access to the pattern, choose the user name from the list in the Access granted to pane.

    • Toggle read and write access for this user by clicking the Read and Write links.
    • Click Remove to delete access for this user.


Delete database patterns

You can delete the database patterns used to provision databases using IBM Database Patterns.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Access the database pattern page by using one of the following steps:

    • On the Welcome page, under Working with databases, click Create Database Pattern.
    • Click Patterns > Database Patterns.

  3. In the list, click the name of the database pattern you want to delete.

  4. Click the Delete icon.

  5. Click Yes in the dialog box. Deleting a pattern does not affect any databases previously deployed from that pattern.


Work with customized workload standards

A workload standard is a set of configuration settings you can use to deploy a defined type of database. You can use IBM Database Patterns to change some of these configurations and create customized workload standards.

IBM Database Patterns provides default workload standards for deploying databases. Customized workload standards allow you to define specific tuning requirements that may be required for mature applications or workloads.

Each workload standard includes metadata and a package. An administrator enters the metadata through the interface, the CLI, or the REST API. The workload standard package is a .zip file that contains several directories of scripts.

Treat each customized workload standard as a corporate standard. The workload standard is replicated with each deployment, so customized workload standards should be strictly reviewed and tested before deployment.


Create customized workload standards

You can create a database workload standard with user-defined settings using IBM Database Patterns.

You must be logged in as an administrator to perform this task. All fields except Description are required.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Choose Catalog > Database Workload Standards.

  3. On the Database Workload Standards page, click the New icon.

  4. In the Workload Standard box:

    1. Enter values for the following fields. Use default values where they are supplied.

      Name

      A unique name is recommended.

      Description

      Identifying details provided by the user.

      Workload type

      Choose an option from the menu. Departmental Transactional indicates databases deployed from this workload standard will be used in a transactional environment. Data Mart indicates databases to be used in an analytics environment.

      Initial disk size

      Specify the space available when the database is first deployed. The value must be an integer from 0 to 500.

      Storage multiplier

      Used with Initial disk size to determine the user data size. The default value is recommended. The range of permitted values is 1.0 to 3.0.

    2. Click Browse in Upload file and navigate to the location of the .zip file that contains the workload standard scripts. The file must have a .zip extension. The file name must be 1 to 18 alphanumeric characters and cannot begin with a number or an underscore.

    3. Click Save.

Note: You can view both default and customized workload standards in the list displayed.

When you deploy a database from a customized workload standard, do not specify Database compatibility mode or upload a Schema File.


Modify customized workload standards

You can edit previously created customized database workload standards using IBM Database Patterns.

You must be logged in as an administrator to perform this task.

Note: The two default database workload standards, Departmental Transactional and Data Mart, cannot be edited.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Choose Catalog > Database Workload Standards.

  3. On the Database Workload Standards page, click the Open icon.

  4. In the Workload Standard box:

    1. Edit values for the following fields. Use default values where they are supplied.

      Name

      A unique name is recommended.

      Description

      Identifying details provided by the user.

      Workload type

      Choose an option from the menu. Departmental Transactional indicates databases deployed from this workload standard will be used in a transactional environment. Data Mart indicates databases to be used in an analytics environment.

      Initial disk size

      Specify the space available when the database is first deployed. The value must be an integer from 0 to 500.

      Storage multiplier

      Used with Initial disk size to determine the user data size. The default value is recommended. The range of permitted values is 1.0 to 3.0.

    2. Click Browse in Upload file and navigate to the location of the .zip file that contains the workload standard scripts. The file must have a .zip extension. The file name must be 1 to 18 alphanumeric characters and cannot begin with a number or an underscore.

    3. Click Save.

Note: You can view both default and customized workload standards in the list displayed.

When you deploy a database from a customized workload standard, do not specify Database compatibility mode or upload a Schema File.


Export customized workload standards

You can export customized workload standards using IBM Database Patterns.

You must be logged in as an administrator to perform this task.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  2. Choose Catalog > Database Workload Standards.
  3. From the list click the name of the workload standard you want to export.
  4. Click the Export icon.
  5. Click OK in the dialog box. The workload standard is saved to the system.


Delete customized workload standards

You can delete customized workload standards using IBM Database Patterns.

You must be logged in as an administrator to perform this task.

Note: The Data Mart and Departmental Transactional workload standards, provided by default, cannot be deleted.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  2. Choose Catalog > Database Workload Standards.
  3. From the list click the name of the workload standard you want to delete.
  4. Click the Delete icon.
  5. Click Yes in the dialog box. Deleting a workload standard does not affect any databases previously deployed from that standard.


Modify scripts for customized workload standards

You must tune the scripts that comprise a customized workload standard to produce the configurations you require.

Note:

Certain DB2 configuration changes will prevent a database from working with the database pattern automation, introducing risk to consistent deployment of databases. Changes to configurations that reference table spaces and buffer pools, backup facilities, the authentication server, containers, instance names, monitoring agents, and external tooling should be tested thoroughly before deployment in a live environment.

The following configuration parameters can be tuned with less risk:


Define customized workload standard packages

You must create a package of scripts that execute the configuration changes you require to create a customized workload standard.

The workload standard package is a .zip file that contains the following first-level directories. Each subdirectory is self-contained and has an entry script that invokes the other scripts or files under it when called. The create_db subdirectory and its entry script, create_db.sh, are mandatory; all other subdirectories are optional. All scripts are executed as the OS user db2inst1.

The scripts are invoked in the following order:

tune_inst.sh
create_db.sh (mandatory)
tune_db.sh
init_db.sh
post_start_inst.sh
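
Putting the invocation order and the packaging rules together, a complete package has the following layout. This is a sketch; the archive name is arbitrary, and only the create_db subdirectory is mandatory:

  mystandard.zip
    create_db
      create_db.sh
    tune_inst
      tune_inst.sh
    tune_db
      tune_db.sh
    init_db
      init_db.sh
    post_start_inst
      post_start_inst.sh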


The following sections describe each subdirectory and its entry script.

create_db (mandatory)

Entry script: create_db.sh

Parameters are specified in the following format:

instance name=$1
database name=$2
sqltype=$3
database path=$4

DBPATH must be located at /home/db2inst1.

This directory includes the scripts to create the database. It is invoked to create the database after the db2 instance is created. If the script fails, the database (including WAS) is not available for use.

The script returns 0 when successful. Other returned values, for example -1, indicate failure. Check the log files (console.log, trace.log, error.log) for errors.

Sample script:

inst_name=$1
db_name=$2
inst_sqltype=$3
db_path=$4

db2 "CREATE DATABASE ${db_name} ON ${db_path} 
     USING CODESET UTF-8 TERRITORY US 
     COLLATE USING SYSTEM PAGESIZE 8 K"
rc=$?
if [[ ${rc} -ne 0 ]] ; then
   echo "Failed to create database."
   exit ${rc}
fi

tune_inst (optional)

Entry script: tune_inst.sh

Parameter: instance name

This directory includes the scripts or files to tune the db2 instance. It is invoked to configure the dbm cfg parameters after the db2 instance is created. Write the script to tune the db2 instance by using the following format:

db2 "update dbm cfg using MAXAGENTS 10"
db2 "update dbm cfg using NUM_POOLAGENTS 8"
db2 "update dbm cfg using NUM_INITAGENTS 2"

The script returns 0 when successful. Other returned values, for example -1, indicate failure.

post_start_inst (optional)

Entry script: post_start_inst.sh

Parameters: instance name, database name

This directory includes the scripts or files to start certain db2-related processes after the db2 instance starts. It is invoked automatically after each start of the db2 instance to begin processes such as DB2 Text Search. The script is executed after the db2 instance is created. The script returns 0 when successful. Other returned values, for example -1, indicate failure.

tune_db (optional)

Entry script: tune_db.sh

Parameters: instance name, db_name, appuser, appuser password, appdba, appdba password

This directory includes scripts to tune the database. It is invoked to create table spaces or grant privileges to appuser/appdba after the database is created. The script returns 0 when successful. Other returned values, for example -1, indicate failure.

Sample script:

#!/bin/sh

inst_name=$1
db_name=$2
default_user=$3
default_password=$4

echo "========== tune_db.sh start =========="
db2 "connect to $db_name"

## Create BUFFERPOOLs
db2 "CREATE BUFFERPOOL LGBP PAGESIZE 32K"

## Create TABLESPACEs
db2 "CREATE TABLESPACE LGTBS PAGESIZE 32K BUFFERPOOL LGBP"

## Tune DB, specify other db cfg update commands here
db2 "UPDATE DB CFG USING LOGFILSIZ 25600 DEFERRED"

## If the database user needs specific authorizations, describe them here
## db2 "GRANT DBADM ON DATABASE TO USER $default_user"
db2 "connect reset"
db2 "terminate"

echo "========== tune_db.sh end =========="

init_db (optional)

Entry script: init_db.sh

Parameters: instance name, db_name, appuser, appuser password, appdba, appdba password

This directory includes scripts or files to grant privileges and to create database objects such as schemas, tables, views, procedures, and functions. It is invoked to create database objects or load data into the database after the database is created. The script returns 0 when successful. Other returned values, for example -1, indicate failure.

Sample script:

#!/bin/sh

inst_name=$1
db_name=$2
default_user=$3
default_password=$4

echo "========== init_db.sh start =========="
## Connect to the database as the database default user
db2 "CONNECT TO ${db_name} USER ${default_user} USING ${default_password}"

## Execute the DDL; in this case the "@" terminator character is used
db2 +p -s -v -td@ -f cr_tab.sql

# To load the initial data by db2move, package the data files in the same folder
# db2move ${db_name} import -io insert
db2 commit work
db2 connect reset
db2 terminate

echo "========== init_db.sh end =========="

Sample cr_tab.sql:

create table KLUSER.LGTAB(col1 number(10), col2 VARCHAR2(20))@
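
No sample is provided for post_start_inst.sh. The following minimal sketch assumes that DB2 Text Search is the process to start; the db2ts command is part of DB2, but whether your deployment uses Text Search is an assumption:

#!/bin/sh

inst_name=$1
db_name=$2

echo "========== post_start_inst.sh start =========="
## Start DB2 Text Search after each start of the db2 instance (assumption: Text Search is configured)
db2ts "START FOR TEXT"
echo "========== post_start_inst.sh end =========="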


Create customized workload standard packages

You can create a package for customized workload standards using IBM Database Patterns.

For optimal results, test each of your scripts individually before adding them to a formal workload standard. The scripts in your workload standard are shell scripts that run in the same manner for both manual and automated deployments.

The .zip file name should be 1 to 18 characters in length. Character strings can contain any alphanumeric characters and cannot begin with a number or an underscore.

  1. Create the create_db.sh entry script and optional subdirectories as required for customizing your workload standard.
  2. Test your scripts in your local environment.
  3. Create a .zip file using your preferred tool; see the example after these steps.
  4. Upload your workload standard.
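
For example, assuming the subdirectories are in the current working directory and the Info-ZIP command-line tool is available (the archive name is a placeholder that follows the naming rules above):

zip -r mystandard.zip create_db tune_inst tune_db init_db post_start_inst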

Deploy a database from your newly created workload standard, then log onto the virtual machine as the DB2 instance owner and confirm that your specified configurations were implemented.


Move data into a database

You can populate a new database created with IBM Database Patterns by moving data from an existing database.

The existing database must be at the same version level as the new database receiving the migrated data. For example, data from a DB2 v9.7 database should only be moved into a DB2 v9.7 database.

Test each of your scripts individually before adding them to a formal workload standard. The scripts in a workload standard are shell scripts that run the same way on a newly deployed database, whether run manually for testing purposes or as part of the automated deployment.

The .zip file name should be 1 to 18 characters in length. Character strings can contain any alphanumeric characters and cannot begin with a number or an underscore.

Other data movement methods, such as db2move, are possible but might be more complex. See the related links for more details.

  1. Perform a full online backup with include logs on the source database.

  2. Create a customized workload standard package.

    1. Create a customized workload standard directory, and within that create the create_db directory.

    2. Create the create_db.sh script to invoke your database backup image and place it in the create_db directory. For more details on writing the script, see the related links and the sketch after these steps.

    3. Place the backup image in the create_db directory.

    4. Create and upload the workload standard package .zip file by using the CLI. The GUI is not suitable for upload because of size limitations. Structure the .zip file package similar to the following example:

      datamove.zip
        create_db
          create_db.sh
          [database backup image]
      

  3. Deploy a new database with this customized workload standard.

  4. Connect to the database to verify that it is functional.

  5. Log on to the virtual machine as the DB2 instance owner and confirm that your specified configurations were implemented.

  6. Delete the customized workload standard if it is no longer needed.
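
A minimal sketch of a create_db.sh that restores the packaged image instead of creating a database from scratch. The source database name SRCDB is an assumption, and depending on how the image was taken, a redirected restore or different rollforward handling might be required (see the related links):

#!/bin/sh

inst_name=$1
db_name=$2

## Restore the backup image packaged in this directory; the image was taken
## online with logs, so it must be rolled forward before use
mkdir -p /home/db2inst1/overflow
db2 "RESTORE DATABASE SRCDB FROM . INTO ${db_name} LOGTARGET /home/db2inst1/overflow REPLACE EXISTING"
rc=$?
if [[ ${rc} -ne 0 ]] ; then
   echo "Failed to restore database."
   exit ${rc}
fi
db2 "ROLLFORWARD DATABASE ${db_name} TO END OF BACKUP AND STOP OVERFLOW LOG PATH (/home/db2inst1/overflow)"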

Performing a redirected restore operation

Data movement options

Backup and restore operations between different operating systems and hardware platforms


Use the command-line interface for the workload console

To perform administrative functions for the workload console, you can download and run the command-line interface from a local machine.


Retrieve details of database pattern types

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.

Note: The value of {dbaas_patterntype} can be dbaas, dbaas.std.oltp, or dbaas.std.datamart.


Retrieve database pattern type details.

Example deployer.patterntypes.get("dbaas.std.oltp","1.1")
Output

{
  "description": "IBM Transactional Database Pattern",
  "name": "IBM Transactional Database Pattern",
  "required": (nested object),
  "shortname": "dbaas.std.oltp",
  "status": "avail",
  "version": "1.1"
}


Accept license agreements for database pattern types

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.

Note: The value of {dbaas_patterntype} can be dbaas.std.oltp or dbaas.std.datamart.

Accept the license agreement for a database pattern type.

Example deployer.patterntypes.get('dbaas.std.oltp', '1.1.0.0').acceptLicense()
Output {'status':'accepted'}


Enable database pattern types

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.

Note: The value of {dbaas_patterntype} can be dbaas, dbaas.std.oltp, or dbaas.std.datamart.


Enable a database pattern type.

Example deployer.patterntypes.get('dbaas', '1.1.0.0').enable()
Output {'status':'avail'}


Disable database pattern types

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.

Note: The value of {dbaas_patterntype} can be dbaas, dbaas.std.oltp, or dbaas.std.datamart.


Disable a database pattern type.

Example deployer.patterntypes.get('dbaas', '1.1.0.0').disable()
Output { 'status': 'deprecated'}


Retrieve plug-in configuration information

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Retrieve plug-in configuration information.

Example deployer.plugins.getConfig('oltp/1.1.0.0')
Output

{
    'metadata': [
        {
            'label': 'Environment',
            'displayId': 'Environment',
            'type': 'string',
            'id': 'parms.environment',
            'options': [
                {
                    'value': None,
                    'name': 'None(disabled)'
                },
                {
                    'value': 'PROD',
                    'name': 'Only IBM Transactional Database Pattern'
                },
                {
                    'value': 'NONPROD',
                    'name': 'Only IBM Transactional Database Pattern for Non-Production Environment'
                },
                {
                    'value': 'BOTH',
                    'name': 'Both'
                }
            ]
        }
    ],
    'config': {
        'version': '1.1.0.0',
        'patterntypes': {
            'secondary': [
                {
                    'dbaas': '1.1'
                }
            ],
            'primary': {
                'dbaas.std.oltp': '1.1'
            }
        },
        'parms': {
            'environment': 'PROD',
            'dbaas_standard': True
        },
        'packages': {
            'oltp.prod': [
                {
                    'node-parts': [
                        {
                            'node-part': 'nodeparts/license.tgz',
                            'parms': {
                                'tagfile': 'Transactional_Database_Pattern.1.1.0.swtag'
                            }
                        }
                    ]
                }
            ],
            'oltp.nonprod': [
                {
                    'node-parts': [
                        {
                            'node-part': 'nodeparts/license.tgz',
                            'parms': {
                                'tagfile': 'Transactional_Database_Pattern_for_Non_Production_Environment.1.1.0.swtag'
                            }
                        }
                    ]
                }
            ]
        },
        'name': 'oltp'
    }
}


List all database patterns

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


List all database patterns.

Example deployer.applications._list({"app_type":"database"})
Output

{
  "access_rights": (nested object),
  "acl": (nested object),
  "app_id": "a-5f733bb6-4be8-4c74-852f-055f09274192",
  "app_name": "pattern1",
  "app_type": "database",
  "artifacts": (nested object),
  "content_md5": "D086CC2D8E54E7E86E159085392DD6D8",
  "content_type": "application/json",
  "create_time": "2011-09-08T03:27:46Z",
  "creator": "cbadmin",
  "last_modified": "2011-09-08T03:27:48Z",
  "last_modifier": "cbadmin"
}


Update database patterns

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Update a database pattern.

Example deployer.applications.get("a-5f733bb6-4be8-4c74-852f-055f09274192").update("/home/updateDBPattern.json")
Input file: updateDBPattern.json

{"model":
  {"app_type":"database",
   "patterntype":"dbaas",
   "version":"1.1",
   "name":"db_pattern1",
   "description":"",
   "nodes":
     [{"attributes":
        {"purpose":"production",
         "source":"defaultWorkloadStandardApproach",
         "dbDiskSize":30,
         "workloadStandard":"departmental_OLTP",
         "sqlType":"DB2",
         "dbTerritory":"US",
         "dbCodeset":"UTF-8",
         "dbCollate":"SYSTEM",
         "versionName":"V97Linux",
         "fixpackName":"db2_hybrid_en-9.7.0.5-linuxx64-20120112.tgz"},
       "type":"DB2",
       "id":"database"}]}}
Output No output. View the details of the updated pattern type to verify that the command executed successfully.


Create database patterns

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Create a database pattern.

a) Applying a default database workload standard. Example deployer.applications.create("D:\\deployer.cli\\bin\\appmodel-default.json")
Input file: appmodel-default.json

    {"model":{"app_type":"database", "patterntype":"dbaas", "version":"1.1", "name":"dbp_default", "description":"", "nodes":[{"attributes":{"purpose":"production", "source":"defaultWorkloadStandardApproach", "dbDiskSize":30, "workloadStandard":"departmental_OLTP", "sqlType":"DB2", "dbTerritory":"US", "dbCodeset":"UTF-8", "dbCollate":"SYSTEM", "versionName":"V97Linux", "fixpackName":"db2_hybrid_en-9.7.0.5-linuxx64-20120112.tgz"}, "type":"DB2", "id":"database"}]}}

b) Applying a customized database workload standard. Example deployer.applications.create("D:\\deployer.cli\\bin\\appmodel-customized.json")
Input file: appmodel-customized.json

    {"model":{"app_type":"database", "patterntype":"dbaas", "version":"1.1", "name":"dbp_cust", "description":"", "nodes":[{"attributes":{"purpose":"production", "source":"workloadStandardApproach", "cusDbDiskSize":30, "cusWorkloadStandard":"8e64a636-7920-4e11-a471-55ffb3e7b75c", "cusVersionName":"V97Linux", "cusFixpackName":"db2_hybrid_en-9.7.0.5-linuxx64-20120112.tgz"}, "type":"DB2", "id":"database"}]}}

c) Cloning from a database image. Example deployer.applications.create("D:\\deployer.cli\\bin\\appmodel-clone.json")
Input file: appmodel-clone.json

{"model":{"app_type":"database",
"patterntype":"dbaas",
"version":"1.1",
"name":"clone_dbp",
"description":"",
"nodes":[{"attributes":{"purpose":"production",
"source":"cloneApproach",
"databaseImage":"auto_172.16.68.124_mydb6_20120511230014.json"},
"type":"DB2",
"id":"database"}]}}

Sample output

{
  "access_rights": (nested object),
  "acl": (nested object),
  "app_id": "a-aa0581a8-c0e1-47ce-8909-686f1b588edf",
  "app_name": "dbp_default",
  "app_type": "database",
  "artifacts": (nested object),
  "content_md5": "3EB002BE6EB3CE941FDE95668B283E35",
  "content_type": "application/json",
  "create_time": "2012-05-14T02:30:34Z",
  "creator": "cbadmin",
  "last_modified": "2012-05-14T02:30:34Z",
  "last_modifier": "cbadmin",
  "patterntype": "dbaas",
  "version": "1.1"
}


Upload scripts for database patterns

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Upload a script file for a database pattern specified by Database Pattern ID.

Example deployer.applications.get("a-5f733bb6-4be8-4c74-852f-055f09274192").artifacts.upload("/home/lyy/my/testdb.sql")
Output {'file': 'artifacts/testdb.sql', 'file_name': 'testdb.sql', 'fileName': 'testdb.sql'}


Delete database patterns

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Delete a database pattern specified by Database Pattern ID.

Example deployer.applications.delete("a-5f733bb6-4be8-4c74-852f-055f09274192")
Output No output. View the list of all database patterns to verify that the pattern has been removed.


List all databases

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


List all databases.

Example deployer.databases.getlist()
Output

[
    {
        'status': 'TERMINATED',
        'creator_name': 'cbadmin',
        'dbname': 'mydb',
        'start_time': '2011-10-18T06: 55: 46.661Z',
        'id': 'd-fc0c1425-8daf-41ba-ac6e-7413d7f4bc87',
        'creator': 'cbadmin'
    },
    ...
]


Create databases

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Deploy a database specified by Database Pattern ID.

Example deployer.applications.get("a-aa0581a8-c0e1-47ce-8909-686f1b588edf").deploy("defdb",deployer.clouds[0],None,{"database.dbname":"defdb"})
Output

{
  "acl": (nested object),
  "app_id": "a-aa0581a8-c0e1-47ce-8909-686f1b588edf",
  "app_type": "database",
  "appmodel": "https://172.16.65.62:9444/storehouse/user/deployments/d-d497f877-
4526-4578-8e4d-2662fd1125f2/appmodel.json",
  "deployment": "https://172.16.65.62:9444/storehouse/user/deployments/d-d497f87
7-4526-4578-8e4d-2662fd1125f2/deployment.json",
  "deployment_name": "defdb",
  "id": "d-d497f877-4526-4578-8e4d-2662fd1125f2",
  "maintenance_mode": False,
  "operations": (nested object),
  "role_error": False,
  "start_time": "2012-05-14T02:36:39.017Z",
  "status": "LAUNCHING",
  "topology": "https://172.16.65.62:9444/storehouse/user/deployments/d-d497f877-
4526-4578-8e4d-2662fd1125f2/topology.json"
}


Retrieve database details

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Retrieve database details specified by Database ID.

Example deployer.databases.get("d-201fd95b-31c4-403a-8169-7c976da57a2f")
Output

{
    'status': 'TERMINATED',
    'description': '',
    'dbname': 'mydb',
    'id': 'd-fc0c1425-8daf-41ba-ac6e-7413d7f4bc87',
    'creator': 'cbadmin',
    'sqlType': 'DB2'
}


Delete databases

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


Delete a database specified by Database ID.

Example deployer.virtualapplications.delete("d-201fd95b-31c4-403a-8169-7c976da57a2f")
Output No output. View the list of all databases to verify the database has been removed.


List database images

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.


List all database backup images.

Example deployer.dbimages.getlist()
Output [{'tsmnodename': 'd-3651868c-99d7-4fd4-89fb-02a6f0b01a6d', 'timestamp': '20110518092615', 'dbname': 'mydb', 'imagename': 'bk30', 'id': 'myimage2.json', 'imagedescription': 'bk', 'host': '172.16.37.180'}]

List database images by dbaasversionge, with the general form deployer.dbimages.getlist({"dbaasversionge": <dbaas_version>}).

Example deployer.dbimages.getlist({"dbaasversionge":"1.1"})
Output [{'tsmnodename': 'd-3651868c-99d7-4fd4-89fb-02a6f0b01a6d', 'timestamp': '20110518092615', 'dbname': 'mydb', 'imagename': 'bk30', 'id': 'myimage2.json', 'imagedescription': 'bk', 'host': '172.16.37.180'}]


Create database images

You can use the command-line interface (CLI) in IBM Database Patterns to manage your database patterns, databases, database images, database workloads, pattern types, and system plug-ins.

Note: You can create a backup database image in two ways: create a database image manually, or have a database image created automatically.


Manually create a database image specified by Database ID.

Example deployer.virtualapplications.get("d-201fd95b-31c4-403a-8169-7c976da57a2f").operations.create({"role": "database-db2.DB2","type": "backup","global": "false", "parameters": {"imageName": "testimage", "imageDescription": "My database image for testdb"},"script": "backup.py","method": "backup","roleType": "DB2"})
Output

{
  "operation_id": "o-dc099918-e727-403e-aeb6-07ab4c8a5407",
  "parameters": (nested object),
  "result": (nested object),
  "role": "database-db2.DB2",
  "virtualapplication": (nested object)
}


Automatically create database images for the database specified by Database ID.

Example deployer.virtualapplications.get("d-201fd95b-31c4-403a-8169-7c976da57a2f").operations.create({"role":"database-db2.DB2","type":"auto-backup", "parameters":{"frequency":"daily"}})
Output

{
  "operation_id": "o-8baaa258-640f-44a6-ad58-256fa70c12dc",
  "parameters": (nested object),
  "result": (nested object),
  "role": "database-db2.DB2",
  "virtualapplication": (nested object)
}


Retrieve details of database images



Retrieve database image information specified by Database Image ID.

Example deployer.dbimages.get("myimage.json")
Output {'tsmnodename': 'd-3651868c-99d7-4fd4-89fb-02a6f0b01a6d', 'dbaasversion': '1.1', 'backupmode': 'ONLINE', 'timestamp': '20110518092615', 'dbname': 'mydb', 'imagename': 'bk30', 'backuptype': 'NORMAL', 'imagedescription': 'bk', 'host': '172.16.37.180'}


Change database user passwords



Change the appuser or appdba password for the database specified by Database ID.

Example deployer.virtualapplications.get("d-201fd95b-31c4-403a-8169-7c976da57a2f").operations.create({"role": "database-db2.DB2","type": "configuration","global": "false", "parameters": {"DB2.PASSWORD": "NQitXSzfpcZ6L3", "DB2.APPDBAPASSWORD": "8ga0AOQ79dkk1VwCQh"},"script": "change.py","method": "configuration","roleType": "DB2"})
Output

{
  "role": "database-db2.DB2",
  "type": "configuration",
  "global": false,
  "parameters": {
    "DB2.PASSWORD": "NQitXSzfpcZ6L3",
    "DB2.APPDBAPASSWORD": "8ga0AOQ79dkk1VwCQh"
  },
  "script": "change.py",
  "method": "configuration",
  "roleType": "DB2"
}


Retrieve database workload lists



Get the database workload list.

Example deployer.dbworkloads.getlist()
Output

[
   {
      "rate": "3",
      "workload_type": "Departmental OLTP",
      "workload_file": "departmental_OLTP.zip",
      "is_system": "true",
      "version": "1.1.0.1",
      "initial_disk_size": "1",
      "name": "Departmental Transactional",
      "id": "departmental_OLTP",
      "description": "For databases primarily used for online transaction processing (OLTP). The database will be optimized for transactional applications."
   },
   {
      "rate": "3",
      "workload_type": "Dynamic Data Mart",
      "workload_file": "dynamic_datamart.zip",
      "is_system": "true",
      "version": "1.1.0.1",
      "initial_disk_size": "1",
      "name": "Data Mart",
      "id": "dynamic_datamart",
      "description": "For databases primarily used for data warehousing. The database will be optimized for reporting applications."
   },
   {
      "rate": "3",
      "workload_type": "Departmental OLTP",
      "workload_file": " customized_oltp.zip",
      "is_system": "false",
      "version": "1.1.0.1",
      "initial_disk_size": "1",
      "name": "dwl1",
      "id":"121b79d0-9faf-457c-9bda-67b4864c115d",
      "description": "the first one"
   }
]


Retrieve database logs



Get the log list from a database VM specified by Database ID and Virtual Machine ID.

Example deployer.loggings.getLogList(deployer.virtualapplications.get("d-50dedbbc-a0f4-46ce-8788-8c09fe51f096"),"database-db2.11316173053230")
Output

{'DB2': ['/home/db2inst1/sqllib/log/instance.log',
'/home/db2inst1/sqllib/db2dump/db2diag.log', 
'/home/db2inst1/sqllib/db2dump/db2inst1.nfy', 
'/home/db2inst1/sqllib/db2dump/stmmlog/stmm.0.log'], 
'IWD Agent': ['/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.DB2/console.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.DB2/trace.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.SSH/console.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.SSH/trace.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.AGENT/console.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.AGENT/trace.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/console.log.0', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.systemupdate/console.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.systemupdate/trace.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/ffdc.log.0', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/trace.log.0', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.MONITORING/console.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/database-db2.11316173053230.MONITORING/trace.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/install/console.log', 
'/opt/IBM/maestro/agent/usr/servers/database-db2.11316173053230/logs/install/trace.log', 
   '/0config/0config.log'], 
   'OS': ['/var/log/cron', 
   '/var/log/acpid', 
   '/var/log/wtmp', 
   '/var/log/secure', 
   '/var/log/brcm-iscsi.log', 
   '/var/log/maillog', 
   '/var/log/mcelog', 
   '/var/log/messages', 
   '/var/log/spooler', 
   '/var/log/yum.log', 
   '/var/log/boot.log', 
   '/var/log/dmesg']
}


Download log content from a database VM specified by Database ID, Virtual Machine ID, and log file path.

Example deployer.loggings.download("/home/mylog.log",deployer.virtualapplications.get("d-50dedbbc-a0f4-46ce-8788-8c09fe51f096"),"database-db2.11316173053230","/home/db2inst1/sqllib/db2dump/db2diag.log")
Output No specific result is returned. Check the downloaded log file at your <specific_file_name>.
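
Because getLogList returns a dictionary that maps log categories to lists of file paths, you can combine it with download to fetch several logs in one pass. A minimal sketch, assuming the example deployment and virtual machine IDs above and that the returned object behaves like a Python dictionary:

    # Sketch: download every DB2 log from one database VM.
    vapp = deployer.virtualapplications.get("d-50dedbbc-a0f4-46ce-8788-8c09fe51f096")
    vm_id = "database-db2.11316173053230"

    logs = deployer.loggings.getLogList(vapp, vm_id)
    for index, remote_path in enumerate(logs.get("DB2", [])):
        # Save each remote log under /tmp with a unique local name.
        local_path = "/tmp/db2-log-%d.log" % index
        deployer.loggings.download(local_path, vapp, vm_id, remote_path)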


Create customized database workload standards



Create a new customized database workload standard.

Example deployer.dbworkloads.create({"rate":"3","workload_type":"Departmental OLTP","initial_disk_size":"1","name":"dbwl1","description":"the first one","is_system":"false","version":"1.1.0.1","workload_file":"customized_oltp.zip"})

workload_type: Value must be "Departmental OLTP" or "Dynamic Data Mart"

initial_disk_size: Value must be from 0 to 500

rate: Value must be from 1 to 3

is_system: Value must be "false"

workload_file: File must be a .zip file, and the file name cannot start with a number or an underscore

Output

{
'version': '1.1.0.1', 
'initial_disk_size': '1', 
'description': 'the first one', 
'rate': '3', 
'workload_type': 'Departmental OLTP', 
'id': '121b79d0-9faf-457c-9bda-67b4864c115d',
 'is_system': 'false', 'name': '
dbwl1', 
'workload_file': 'customized_oltp.zip'
}


Upload .zip files for customized database workload standards


Note: This step is required to create a customized workload standard.


Upload the .zip file.

Example deployer.dbworkloads.get("121b79d0-9faf-457c-9bda-67b4864c115d").workloadfiles.upload("/root/deployer.cli/bin/customized_oltp.zip")
Output

{
'filename': 'customized_oltp.zip',
'success': 'true'
}
The file name must be the same as the value of "workload_file" in the metadata JSON file.
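
Because the uploaded file name must match the workload_file value, it can help to drive the create and upload steps from a single variable. A minimal sketch, assuming the metadata from the create example and that the created resource exposes its ID by subscript:

    # Sketch: create a customized workload standard, then upload its .zip file.
    # Keeping the file name in one variable ensures it matches "workload_file".
    zip_name = "customized_oltp.zip"

    workload = deployer.dbworkloads.create({
        "rate": "3", "workload_type": "Departmental OLTP",
        "initial_disk_size": "1", "name": "dbwl1",
        "description": "the first one", "is_system": "false",
        "version": "1.1.0.1", "workload_file": zip_name})

    # Upload the matching .zip file from the CLI's bin directory.
    deployer.dbworkloads.get(workload["id"]).workloadfiles.upload(
        "/root/deployer.cli/bin/" + zip_name)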


List database workload standards



List all database workload standards.

Example deployer.dbworkloads.list()
Output

[
   {
      "rate": "3",
      "workload_type": "Departmental OLTP",
      "workload_file": "departmental_OLTP.zip",
      "is_system": "true",
      "version": "1.1.0.1",
      "initial_disk_size": "1",
      "name": "Departmental Transactional",
      "id": "departmental_OLTP",
      "description": "For databases primarily used for online transaction processing (OLTP). The database will be optimized for transactional applications."
   },
   {
      "rate": "3",
      "workload_type": "Dynamic Data Mart",
      "workload_file": "dynamic_datamart.zip",
      "is_system": "true",
      "version": "1.1.0.1",
      "initial_disk_size": "1",
      "name": "Data Mart",
      "id": "dynamic_datamart",
      "description": "For databases primarily used for data warehousing. The database will be optimized for reporting applications."
   },
   {
      "rate": "3",
      "workload_type": "Departmental OLTP",
      "workload_file": " customized_oltp.zip",
      "is_system": "false",
      "version": "1.1.0.1",
      "initial_disk_size": "1",
      "name": "dwl1",
      "id":"121b79d0-9faf-457c-9bda-67b4864c115d",
      "description": "the first one"
   }
]


Update customized database workload standards


Note: The two default database workload standards, Departmental Transactional and Data Mart, cannot be updated.


Update a customized database workload standard.

Example
deployer.dbworkloads.get("121b79d0-9faf-457c-9bda-67b4864c115d").update( {"version": "1.1.0.1", "initial_disk_size": "1", "description": "the first one", "rate": "2", "workload_type": "Departmental OLTP", "id": "121b79d0-9faf-457c-9bda-67b4864c115d", "is_system": "false", "name": "dbwl1", "workload_file":"customized_oltp.zip"})

workload_type: Value must be "Departmental OLTP" or "Dynamic Data Mart"

initial_disk_size: Value must be from 0 to 500

rate: Value must be from 1 to 3

is_system: Value must be "false"

workload_file: File must be a .zip file, and the file name cannot start with a number or an underscore

Output No output. Get the details of the updated workload standard to verify.
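
Because update returns no output, re-reading the workload standard afterward is the way to confirm the change. A minimal sketch:

    # Sketch: update a customized workload standard, then re-read it to verify.
    workload_id = "121b79d0-9faf-457c-9bda-67b4864c115d"

    deployer.dbworkloads.get(workload_id).update({
        "version": "1.1.0.1", "initial_disk_size": "1",
        "description": "the first one", "rate": "2",
        "workload_type": "Departmental OLTP", "id": workload_id,
        "is_system": "false", "name": "dbwl1",
        "workload_file": "customized_oltp.zip"})

    # The rate should now read "2".
    print deployer.dbworkloads.get(workload_id)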


Download .zip files for database workload standards



Download the .zip file of a customized database workload standard.

Example deployer.dbworkloads.get("121b79d0-9faf-457c-9bda-67b4864c115d").workloadfiles.download("customized_oltp.zip","/root/deployer.cli/bin/a.zip")
Output No output. Check the local file path.


Download the .zip file of a default database workload standard.

Example deployer.dbworkloads.get("departmental_OLTP").workloadfiles.download("departmental_OLTP.zip","/root/deployer.cli/bin/a.zip")
Output No output. Check the local file path.


Delete customized database workload standards


Note: The two default database workload standards, Departmental Transactional and Data Mart, cannot be deleted.


Delete a customized database workload standard.

Example deployer.dbworkloads.get("121b79d0-9faf-457c-9bda-67b4864c115d").delete()
Output No output. Check the result by listing all database workload standards.


Create database users



Create a new user.

Example deployer.virtualapplications.get("d-0a70a575-4bac-420e-8c7a-e663d2448e10").operations.create( {"role":"database-db2.DB2","type":"createDB2User","parameters": {"sshAccess":"Deny","userName":"newuser","password":"123456","authLevel":["SYSADM","SYSCTRL","SYSMAINT","SYSMON"]},"groups": {}})
Output

{
  "operation_id": "o-cc986b72-a38c-47a7-9940-61f0116e0e40",
  "parameters": (nested object),
  "result": (nested object),
  "role": "database-db2.DB2",
  "virtualapplication": (nested object)
}


Change database user passwords



Change the password of a specified user.

Example deployer.virtualapplications.get("d-0a70a575-4bac-420e-8c7a-e663d 2448e10").operations.create( {"role":"database-db2.DB2","type":"resetPassword", "parameters": {"userName":"newuser","newPassword":"<xor>a2pp"},"groups": {} })
Output

{
  "operation_id": "o-216e22e1-1085-4fae-ad67-b185883e0372",
  "parameters": (nested object),
  "result": (nested object),
  "role": "database-db2.DB2",
  "virtualapplication": (nested object)
}


List database users



List all users for a specified database.

Example

    deployer.virtualapplications.get("d-0a70a575-4bac-420e-8c7a-e663d 2448e10").dbusers.getlist()

Output

[
  {'sshAccess': 'Deny', 'userName': 'new2', 'authLevel': ''},
  {'sshAccess': 'Allow', 'userName': 'user4', 'authLevel': 'SYSADM,SYSCTRL,SYSMAINT,SYSMON'},
  {'sshAccess': 'Deny', 'userName': 'newuser', 'authLevel': 'SYSADM,SYSCTRL,SYSMAINT,SYSMON'}
]


Delete database users



Delete a database user.

Example

deployer.virtualapplications.get("d-0a70a575-4bac-420e-8c7a-e663d2448e10").operations.create(
    {"role":"database-db2.DB2","type":"userList","global":"false","role_type":"DB2","parameters":
    {"userName":"newuser"}})

Output

{
  "operation_id": "o-18967b36-1776-4e66-b732-bdd099d1e794",
  "parameters": (nested object),
  "result": (nested object),
  "role": "database-db2.DB2",
  "virtualapplication": (nested object)
}


Create DB2 fix packs


Note: The Create, Upload and Update steps are required to create a fix pack.


Create a DB2 fix pack.

Example deployer.db2fixpacks.create(<python_dictionary_object>)
Output {'description': 'db2 v97fp6', 'id': 'f552d174-89da-4799-a130-82fe2973d8bd', 'name': 'v97fp6'}


Update DB2 fix packs


Note: The Create, Upload and Update steps are required to create a fix pack.


Update a DB2 fix pack to reference its uploaded package file.

Example

deployer.db2fixpacks.get("f552d174-89da-4799-a130-82fe2973d8bd").update(
    {"name": "v97fp6",
     "description": "db2 v97fp6", 
     "id": "f552d174-89da-4799-a130-82fe2973d8bd",
     "packageFile": "v9.7fp6_linuxx64_server.tgz "})
The uploaded file name must be the same as the value of "packageFile" in the metadata JSON file.
Output There is no output from this command; retrieve the details of the updated DB2 fix pack to verify success.


Upload DB2 fix packs


Note: The Create, Upload and Update steps are required to create a fix pack.


Upload a DB2 fix pack package file.

Example deployer.db2fixpacks.upload("/home/alex/deployer.cli.3102/deployer.cli/bin/v9.7fp6_linuxx64_server.tgz")
Output {'file': 'v9.7fp6_linuxx64_server.tgz', 'file_name': 'v9.7fp6_linuxx64_server.tgz', 'fileName': 'v9.7fp6_linuxx64_server.tgz'}
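
The Create, Upload, and Update steps can be chained in one CLI session. A minimal sketch, assuming the calls shown in these three sections, that the create metadata contains the name and description from the Create example, and that the created resource exposes its ID by subscript; the exact shape of the upload call may differ in your CLI version:

    # Sketch: the three steps that register a DB2 fix pack.
    package = "v9.7fp6_linuxx64_server.tgz"

    # 1. Create the fix pack entry.
    fixpack = deployer.db2fixpacks.create(
        {"name": "v97fp6", "description": "db2 v97fp6"})

    # 2. Upload the package file (assumed call shape; see the Upload section).
    deployer.db2fixpacks.upload("/tmp/" + package)

    # 3. Point the fix pack entry at the uploaded package file.
    deployer.db2fixpacks.get(fixpack["id"]).update(
        {"name": "v97fp6", "description": "db2 v97fp6",
         "id": fixpack["id"], "packageFile": package})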


List DB2 fix packs



List all DB2 fix packs.

Example deployer.db2fixpacks.list()
Output

[{
  "db2fixpackfiles": (nested object),
  "description": "db2 v97fp7",
  "id": "9b59c1c5-87db-49c5-aea1-db49f9c7cd1d",
  "name": "db2v97fp7",
  "packageFile": "v9.7fp7_linuxx64_server.tgz",
  "db2level": "9.7.0.7",
  "db2version": "9.7",
  "platform": "linuxx64"
}, {
  "db2fixpackfiles": (nested object),
  "description": "db2 v97fp6",
  "id": "f552d174-89da-4799-a130-82fe2973d8bd",
  "name": "v97fp6",
  "packageFile": "v9.7fp6_linuxx64_server.tgz",
  "db2level": "9.7.0.6",
  "db2version": "9.7",
  "platform": "linuxx64"
}]


List all fix packs valid for DB2 upgrade.

Example deployer.db2fixpacks.getvalidfixpacks("d-96652bdd-8543-4e51-aaa0-2eda7f582e7c")
Output

{'label':'name','identifier':'value','items':[
    {'value':'v9.7fp6_linuxx64_server.tgz', 'name': 'V97fp6'}] }


List all DB2 fix packs valid for creating a database pattern or database instance.

Example deployer.db2fixpacks.getfixpacks()
Sample output

{'label': 'name', 'identifier': 'value', 'items': [
    {'platform': 'linuxx64', 'db2version': '9.7', 'value': 'db2_hybrid_en-9.7.0.5-linuxx64-20120112.tgz', 'name': 'DB2 Version 9.7 Fix Pack 5 for Linux'},
    {'platform': 'aix64', 'db2version': '9.7', 'value': 'db2_hybrid_en-9.7.0.5-aix64-20120112.tgz', 'name': 'DB2 Version 9.7 Fix Pack 5 for AIX'},
    {'platform': 'linuxx64', 'db2version': '10.1', 'value': 'db2_hybrid_en-10.1.0.0-linuxx64-20120312.tgz', 'name': 'DB2 Version 10.1 for Linux'},
    {'platform': 'aix64', 'db2version': '10.1', 'value': 'db2_hybrid_en-10.1.0.0-aix64-20120312.tgz', 'name': 'DB2 Version 10.1 for AIX'},
    {'platform': 'linuxx64', 'db2version': '9.7', 'value': 'v9.7fp6_linuxx64_server.tgz', 'name': 'V97fp6'}]}


Create database patterns that use DB2 fix packs


The command syntax is deployer.applications.create("<local_json_file_path>").


Create a database pattern that uses a DB2 fix pack.

Example deployer.applications.create("D:\\deployer.cli\\bin\\appmodel-fixpack.json") The content of appmodel-default.json looks like:

    {"model":
    {"nodes":[
    {"attributes":
    {"workloadStandard":"departmental_OLTP",
     "dbDiskSize":10,
     "dbCodeset":"UTF-8",
     "dbCollate":"SYSTEM",
     "sqlType":"DB2",
     "versionName":"V97Linux",
     "fixpackName":"v9.7fp6_linuxx64_server.tgz",
     "source":"defaultworkloadStandardApproach",
     "dbTerritory":"US",
     "purpose":"non-production"},
     "type":"DB2",
     "id":"database"}],
     "version":"1.1",
     "app_type":"database",
     "patterntype":"dbaas",
     "name":"fixPattern",
     "description":"use the db2 fixpack named v9.7fp6_linuxx64_server"}}
Output

{
  "access_rights": (nested object),
  "acl": (nested object),
  "app_id": "a-ab1cfdde-da0a-4cc2-aa86-9ddba1abcb29",
  "app_name": "fixPattern",
  "app_type": "database",
  "artifacts": (nested object),
  "content_md5": "DB9A55836909646ADECAB7551FEAD70F",
  "content_type": "application/json",
  "create_time": "2012-02-28T06:43:37Z",
  "creator": "cbadmin",
  "last_modified": "2012-02-28T06:43:39Z",
  "last_modifier": "cbadmin",
  "patterntype": "dbaas",
  "version": "1.1"
}


Deploy databases from database patterns that use DB2 fix packs



Deploy a database with a database pattern that uses a DB2 fix pack.

Example

    deployer.applications.get("a-ab1cfdde-da0a-4cc2-aa86-9ddba1abcb29").deploy("fixdb",deployer.clouds[0],None, {"database.dbname":"fixdb"})

Output

{
  "acl": (nested object),
  "app_id": "a-ab1cfdde-da0a-4cc2-aa86-9ddba1abcb29",
  "app_type": "database",
  "appmodel": "https://172.16.65.196:9444/storehouse/user/deployments/d-96652bdd-8543-4e51-aaa0-2eda7f582e7c/appmodel.json",
  "deployment": "https://172.16.65.196:9444/storehouse/user/deployments/d-96652bdd-8543-4e51-aaa0-2eda7f582e7c/deployment.json",
  "deployment_name": "fixdb",
  "id": "d-96652bdd-8543-4e51-aaa0-2eda7f582e7c",
  "maintenance_mode": False,
  "operations": (nested object),
  "role_error": False,
  "start_time": "2012-02-28T06:51:30.876Z",
  "status": "LAUNCHING",
  "topology": "https://172.16.65.196:9444/storehouse/user/deployments/d-96652bdd-8543-4e51-aaa0-2eda7f582e7c/topology.json"
}


Upgrade databases



Upgrade a deployed database.

Example deployer.virtualapplications.get("d-96652bdd-8543-4e51-aaa0-2eda7f582e7c").operations.create({"role":"database-db2.DB2"," type":"applyFixpacks", "parameters":{"fixpackName":"v9.7fp6_linuxx64_server.tgz"}})
Output

{
  "operation_id": "o-45616cda-7594-4683-930f-81eccdaf0b44",
  "parameters": (nested object),
  "result": (nested object),
  "role": "database-db2.DB2",
  "virtualapplication": (nested object)
}


Upgrade the database application.

Example deployer.virtualapplications.get("d-96652bdd-8543-4e51-aaa0-2eda7f582e7c").upgrade()
Output True
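
The two operations above can be run back to back: queue the applyFixpacks operation against the DB2 role, then upgrade the virtual application. A minimal sketch using the example deployment ID:

    # Sketch: apply a DB2 fix pack to a deployed database, then upgrade it.
    vapp = deployer.virtualapplications.get("d-96652bdd-8543-4e51-aaa0-2eda7f582e7c")

    # Queue the applyFixpacks operation against the DB2 role.
    vapp.operations.create({
        "role": "database-db2.DB2",
        "type": "applyFixpacks",
        "parameters": {"fixpackName": "v9.7fp6_linuxx64_server.tgz"}})

    # Upgrade the database application; returns True on success.
    print vapp.upgrade()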


Delete DB2 fix packs



Delete a specified DB2 fix pack.

Example deployer.db2fixpacks.get("f552d174-89da-4799-a130-82fe2973d8bd).delete()
Output There is no output from this command, retrieve the list of all DB2 fix packs to verify if the fix pack has been removed.


Upgrade DB2 pureScale instances



IBM Data Studio


IBM Data Studio client

The client provides application development and database administration capabilities. Use the client to migrate, create, test, deploy, tune, and manage databases and database applications.

You can use the Data Studio client to complete the following tasks:

Application development

  • Develop pureQuery applications in a Java project
  • Develop SQLJ applications in a Java project
  • Use wizards and editors to create, test, debug, and deploy routines, such as stored procedures and user-defined functions
  • Use wizards and editors to develop XML applications
  • Use the SQL builder and the SQL editor to create, edit, and run SQL queries
  • Use the Routine debugger to debug stored procedures
  • Create web services that expose database operations (SQL SELECT and DML statements, XQuery expressions, or calls to stored procedures) to client applications

Database administration

  • Copy and migrate database objects from one database to another or within the same database
  • Create and modify database objects
  • Change configuration settings
  • Grant and revoke security privileges
  • Run database commands and utilities

Query tuning

  • Use Visual Explain to optimize under-performing SQL queries
  • Tune single SQL statements in applications that query DB2 for Linux, UNIX, and Windows databases
  • Tune query workloads in applications that query DB2 for Linux, UNIX, and Windows databases.

For more information about the product, see the IBM Data Studio Information Center.


Install the product from the system

You can download the IBM Data Studio client from the system and then install the product on your workstation.

To download and install the IBM Data Studio client, follow these steps:

  1. Open the Database tools page by clicking: Catalog > Database Tools.
  2. Click Data Studio client to open the Data Studio client page.
  3. Follow the installation instructions that are provided in the Install from the system section.


Install the product from the web

You can download the IBM Data Studio client from the web and install the product on your workstation.

  1. Go to the IBM Data Studio website.
  2. Click Download at no charge.
  3. On the Download tab, follow the instructions to download the Data Studio client.


Start the IBM Data Studio client

You can start the Data Studio client either from a menu option or from the command line.

Use the following options to start the Data Studio client:

On Windows

  • From the Start menu, click Start > Programs > IBM Data Studio > Data Studio version client.
  • From the command line, enter the following command:

    "DS_install_dir\eclipse.exe"
    
    where DS_install_dir is the directory where you installed the client.

    Note: The default installation directory is C:\Program Files\IBM\DSversion\.

    Example

    "C:\Program Files\IBM\DSversion\eclipse.exe"

On Linux

  • You can start the client from the Applications menu. For example, in a GNOME environment, you would select Applications > IBM Data Studio > Data Studio version client.
  • From the command line, enter the following command:

    DS_install_dir/eclipse
    
    where DS_install_dir is the directory where you installed the client.

    Note: The default installation directory is /opt/IBM/DSversion/.

    Example

    /opt/IBM/DSversion/eclipse

In these commands, version is the version of the client that you installed.


Connect to databases

You create a connection to a database that is in the cloud by enabling SSH access for the user and then specifying the database connection information in the IBM Data Studio client.

  1. Find the JDBC URL
  2. Enable password-based SSH access for the DB2 user
  3. Create a connection in the IBM Data Studio client

You can now connect to the database to perform your application development or database administration tasks.


Find the JDBC URL

To create a connection in the IBM Data Studio client, you need the JDBC URL of the DB2 for Linux, UNIX, and Windows database deployed in the cloud.

Find the JDBC URL for the DB2 for Linux, UNIX, and Windows database by using one of the following methods:

Example

The JDBC URL might look like the following example:

jdbc:db2://10.200.150.100:50000/mydb:user=appdba;password=appdba_password;
You can copy the JDBC URL to the clipboard.

You can use the JDBC URL to create a connection to the database.


Enable password-based SSH access

To access Data Studio functionality that requires SSH access, enable password-based SSH access for the DB2 database user that connects to databases in the cloud. You must enable password-based SSH access for the DB2 user that is specified in the JDBC URL of the DB2 for Linux, UNIX, and Windows database deployed in the cloud.

Example

In this example, the following JDBC URL includes the appdba user name:

jdbc:db2://10.200.150.100:50000/mydb:user=appdba;password=appdba_password;

The DB2 user now has password-based SSH access to the database host.


Create a connection in the IBM Data Studio client

To create a connection to a database, you must create a connection profile with the connection information in the JDBC URL.

Before you create a connection to a database that is in the cloud, you must first complete the following tasks:

  1. Find the JDBC URL
  2. Enable password-based SSH access for the DB2 user

  1. In the Data Studio client, switch to the perspective that is appropriate for your role:

    1. Select Window > Open Perspective > Other.

    2. In the Open Perspective dialog box, select one of the following perspectives:

      • For application development, select Database Development.
      • For database administration, select Database Administration.

  2. Go to the Data Source Explorer view in the Database Development perspective or the Administration Explorer view in the Database Administration perspective.

    Note: If the view is not visible, open it from the Window menu:

    • To display the Data Source Explorer, click Window > Show View > Data Source Explorer.
    • To display the Administration Explorer, click Window > Show View > Administration Explorer.

  3. Start the New Connection wizard:

    • In the Data Source Explorer, click New Connection Profile.
    • In the Administration Explorer, click New > New Connection to a database.

  4. Complete the fields in the wizard:

    1. In the Local tab, select the DB2 for Linux, UNIX, and Windows database manager.

    2. Use the default JDBC driver.

    3. In the General tab of the Properties section, specify the values from the JDBC URL that correspond to the appropriate fields.

      Example

      In this example, the following JDBC URL connects to a DB2 for Linux, UNIX, and Windows database deployed in the cloud:

      jdbc:db2://10.200.150.100:50000/mydb:user=appdba;password=appdba_password;
      
      To create a connection to that database, you would specify the following information in the fields in the New Connection wizard:

      Database

      mydb

      Host

      10.200.150.100

      Port number

      50000

      User name

      appdba

      Password

      appdba_password

    4. Optional: Click Test Connection to ensure that you can connect to the database.

    5. Click Finish.

The connection is displayed in the Database Connections folder in the Data Source Explorer view or in the All Databases folder in the Administration Explorer.
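
If you script around these connections, the wizard fields map mechanically onto the pieces of the JDBC URL. The following standalone Python sketch (an illustration only, not part of the Data Studio client or the deployer CLI) splits the example URL into those fields:

    # Sketch: split a DB2 JDBC URL into the New Connection wizard fields.
    url = "jdbc:db2://10.200.150.100:50000/mydb:user=appdba;password=appdba_password;"

    body = url[len("jdbc:db2://"):]        # "10.200.150.100:50000/mydb:user=..."
    hostport, rest = body.split("/", 1)    # "10.200.150.100:50000" and the remainder
    host, port = hostport.split(":")       # Host and Port number fields
    database, props = rest.split(":", 1)   # Database field and the properties string

    # Properties are semicolon-separated key=value pairs (user, password).
    fields = dict(p.split("=", 1) for p in props.split(";") if p)
    print host, port, database, fields["user"], fields["password"]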


Overview of InfoSphere Optim Query Workload Tuner for DB2 for Linux, UNIX, and Windows

IBM InfoSphere Optim Query Workload Tuner helps database administrators and SQL developers optimize the performance of SQL statements in applications that query DB2 for Linux, UNIX, and Windows databases.

You can tune SQL statements from all supported sources in DB2 for Linux, UNIX, and Windows, including the package cache, packages, and SQL stored procedures. You can also tune SQL statements from the database performance monitor and from text files. You can even type single SQL statements directly into the user interface, called the workflow assistant, and tune them.

By default, IBM InfoSphere Optim Query Workload Tuner is enabled on databases that you deploy in IBM PureApplication System W1500. To tune the SQL statements that query databases in IBM PureApplication System W1500, you use the IBM Query Tuning perspective in the Data Studio client.


Common scenarios for tuning query workloads

InfoSphere Optim Query Workload Tuner supports tuning groups of related SQL statements together. For example, the statements that you tune together might be all of the statements that are in a database application. Tuning this way helps to ensure that changes to database objects in the access plan for a single SQL statement do not adversely affect the access plans of other statements that access those same objects.

Here is a list of the common scenarios for tuning query workloads:

Find and fix changes to access plans

Modifying the SQL statements in an application, changing the environment of that application, and deploying the application can all lead to changes in access plans. So, too, can rebinding packages. With InfoSphere Optim Query Workload Tuner, you can locate changes in multiple access plans and then fix those changes.

Compare multiple access plans by using snapshots of EXPLAIN information

You can compare access plans at the following times:

After running the RUNSTATS utility

You can run the RUNSTATS utility to collect current statistics on tables and indexes. Running this utility provides the optimizer with the most accurate information with which to generate the best access plan. Find out how much the current statistics have improved access plans.

After generating index recommendations

You can find out whether the access plans for a query workload would be improved by recommendations from the Workload Index Advisor.

After testing candidate indexes

You can find out whether access plans for a query workload would be improved by a set of indexes that you tested virtually.

After twice testing candidate indexes virtually

You can compare the differences in the access plans that are produced by two separate virtual tests of candidate indexes.


Features for tuning query workloads

You can use these features when you are tuning query workloads.


Features for tuning single SQL statements

You can use these features when you are tuning single SQL statements.



Known Restrictions

Restrictions, known issues, and workarounds for the IBM database tools are recorded in technotes on the IBM Support site.

See the IBM PureApplication System W1500: Restrictions, Known Issues, and Workarounds for IBM Database Tools technote to learn more about the restrictions, known issues, and workarounds when you use the IBM database tools with PureApplication System.


IBM Web Application Pattern

IBM Web Application Pattern is a virtual application pattern that is a product extension used to build online web application-style virtual applications.

The Web Application Pattern manages application deployment and life cycle. The product extension sits on top of IBM PureApplication System W1500. Plug-in APIs run within the virtual application pattern to support models, patterns, binaries, and automation. A collection of existing services, such as DB2, WebSphere MQ, WAS, and CICS, can be selected for the virtual application pattern, allowing a customized environment.

Specifically, the Web Application Pattern provides a set of components that are typical for online web applications, such as Java Platform, Enterprise Edition (Java EE) applications, databases, LDAP servers, and messaging. After building the virtual application in the Virtual Application Builder, you can deploy the application and the system determines the underlying topology configuration.

Web Application Pattern includes an elastically scalable application server, a database, and an elastic caching component. These components are managed together as a single unit, which reduces the management and operational complexity of an end-to-end environment for hosting Java EE web applications.

You can start using the Web Application Pattern solution when you accept the license agreement. When the license agreement is accepted the Web Application Pattern is listed in the Solution drop-down menu in the Virtual Application Builder user interface.


Get started with the Web Application Pattern

The IBM Web Application Pattern is a standardized application-centric pattern solution that can be reused to deploy and manage resources in a cloud environment. The Web Application Pattern is delivered with IBM PureApplication System W1500. Contrast with the Virtual System Pattern, which is system-centric.

The IBM Foundation Pattern is a prerequisite for all other pattern types. There is no license agreement to accept for the foundation pattern type, but it must be enabled before the other pattern types can be enabled. The pattern type status field displays Disabled or Available; to enable a pattern type, select Enable, and the status changes to Available. After the foundation pattern is enabled, accept the Web Application Pattern license agreement, and then the administrator must enable the web application pattern type to make it available for use.

Use the Virtual Application Builder to define, create, and deploy the virtual application patterns. For example, rather than installing, configuring, and creating a connection to a specific instance of a database, you can specify the need for a database and provide the associated database schema in the virtual application pattern. The database instance and the connection in the cloud is then created for you by the virtual application pattern.

The first step in using this pattern solution is to create a virtual application template or use an existing virtual application template. The template is then used to build the virtual application pattern. After you select a template and start building, the Virtual Application Builder opens where you can customize the virtual application pattern with cloud component parts and policies. Virtual application patterns can be saved and cloned to build new customized patterns. The following steps are the end-to-end flow of creating a virtual application pattern with a web application pattern type and deploying the virtual application instance to your cloud environment. These topics refer to using the user interface. You can also do most of these tasks with the command-line interface or REST API.

  1. Plan the virtual applications.

  2. Work with the virtual application templates. Create a virtual application template or work with an existing virtual application template. In this step you select the Web Application Pattern Type.

  3. Work with virtual application patterns. Create a virtual application pattern or work with an existing virtual application pattern. The virtual application pattern is built from a virtual application template that has the web application pattern as a foundation.

  4. Edit the virtual application pattern. Choose from several different virtual application parts, including web applications, messaging and transaction services, databases, shared services, and policies. The web application pattern provides the right topology for the environment that you want these parts and artifacts to serve.

  5. Create virtual application pattern layers.

  6. Deploy the virtual application pattern. After deployment, the virtual application pattern becomes the virtual application instance.

  7. Monitor the web application pattern components in virtual application instances.

  8. View web application pattern logs.

  9. Troubleshoot the web application pattern components in the virtual application instances.


Web Application Pattern prerequisites

Before you use the IBM Web Application Pattern, verify that your hardware and software meet the minimum requirements.

The official set of hardware and software requirements is available on the System Requirements page of the product support site. If there is a conflict between the information provided in the information center and the information on the System Requirements page, the information at the product support site takes precedence.

Each version of Web Application Pattern ships with a specific hypervisor image of IBM WAS.


Product and pattern type versions

PureApplication System version | Web Application Pattern version | WAS version
1.0.0.0 | 1.0.0.5 | 7.0.0.23
1.0.0.0 | 2.0.0.2 | 8.0.0.3
1.0.0.1 | 1.0.0.6 | 7.0.0.23
1.0.0.1 | 2.0.0.3 | 8.0.0.3

There is potential security exposure in the following WAS versions:

For more information and links to a fix that addresses the security exposure, see Flash 1609067.


Hardware requirements


Software requirements


System configuration

A configured NTP server accessible by the virtual machines is required to successfully deploy a virtual application pattern or virtual system pattern. When virtual application patterns or virtual system patterns are deployed, the NTP server is used to establish the system time for the virtual machines. Without a synchronized date and time, problems can occur resulting in incomplete deployments or failure to start virtual application instances or virtual system instances. If an NTP server is not used, the system clocks for the IBM PureApplication System W1500 system and the hypervisors must be synchronized manually. A DNS server must also be configured in the system.


Virtual application pattern port numbers

Use the list of ports as a guide to setting ports in the IBM PureApplication System W1500 environment. It is important to know the product ports so that you can configure your system firewall to work with PureApplication System.


Appliance and virtual machine ports


Workload Deployer appliance and virtual machine ports

Port | Source | Destination
TCP 80, 9443, 9444 | Any client that accesses the PureApplication System appliance | PureApplication System appliance
TCP 22, 8887, 8888, 9999, 20000; UDP 1000 | Virtual machine to virtual machine communication (the virtual machines communicate with each other from within the cloud) | Provisioned virtual machine (any type)
TCP 9443, 9444 | Provisioned virtual machine (any type) | PureApplication System appliance
TCP 9080, 9443 | Any client accessing a web application running on IBM WAS | Provisioned virtual machine (WAS)
TCP 50000 | WAS virtual machine | Provisioned virtual machine (DB2)
TCP 4553, 50010 | Any OPTIM client that is accessing DB2 for debugging | Provisioned virtual machine (DB2)
TCP 4554, 4555, 50010 | DB2 virtual machine connecting to an OPTIM client for debugging | Provisioned virtual machine (DB2)
TCP 12100 (Tivoli Directory Server administrative port) | Any client accessing the Tivoli Directory Server administrator | Provisioned virtual machine (Tivoli Directory Server)
TCP 25 | Database Performance Monitor sending email alert notifications | Email clients
TCP 55000, 55001 | Any client accessing Database Performance Monitor | Database Performance Monitor web server
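
When you verify that a firewall passes these ports, a quick reachability probe can save time. A standalone Python sketch (the appliance address is a hypothetical example; this is not part of the deployer CLI):

    # Sketch: probe a few appliance ports to verify firewall configuration.
    import socket

    appliance = "172.16.65.62"    # hypothetical appliance IP address
    for port in (80, 9443, 9444):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        try:
            s.connect((appliance, port))
            print "port", port, "reachable"
        except socket.error:
            print "port", port, "blocked or closed"
        s.close()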


Monitor ports


PureApplication System ports used for monitoring virtual application patterns

Port | Direction
TCP 22, 25, 162, 50000, 55000 | Inbound and outbound
TCP 443, 10001, 11080, 11081, 11086, 11087, 15200, 15211 | Inbound


IBM Image Construction and Composition Tool ports


PureApplication System ports used for IBM Image Construction and Composition Tool

Port | Direction
TCP 22, 80, 443 | Inbound and outbound


Log ports


PureApplication System ports used for logging virtual application patterns

Port | Direction
TCP 22, 873 | Outbound


Application Pattern Type for Java ports


PureApplication System ports used by the Application Pattern Type for Java

Port | Direction
TCP 1972, 35535, 7777, and any ports used by your Java applications | Inbound

Note:

  • TCP 1972 and 35535 are used if Health Center monitoring is enabled.
  • TCP 7777 is used if debugging is enabled.


Web Application Pattern ports


PureApplication System ports used by the Web Application Pattern

Port | Direction
TCP 1972, 7777, 9080, 9443, 12100, 35535, and any ports used to connect to remote components | Inbound

Note:

  • TCP 1972 is used if monitoring is enabled.
  • TCP 7777 is used if debugging is enabled.


Plan the virtual application

Creating a virtual application consists of using the Virtual Application Builder to define and create the virtual application templates and patterns, and then deploying the pattern in the cloud environment. A deployed virtual application pattern is a virtual application instance.


First steps

Gather the application artifacts and necessary information to configure the cloud components.

Gather the necessary information to configure the quality of service (QoS) policies.

After you collect the necessary information and artifacts previously described, you can begin using the Virtual Application Builder to design and configure the cloud components and policies to create the virtual applications.


Limitations

If you plan to develop customized plug-ins to extend the functionality of IBM Web Application Pattern with additional configuration or update code, the following configurations are not supported:

Your plug-in code can include additional configuration changes within application servers as long as they do not conflict with the configuration changes made by Web Application Pattern. For example, you cannot delete a data source created by Web Application Pattern from your customized plug-in.


Upgrade the Web Application Pattern

Before patterns can be updated, you must upgrade PureApplication System with the latest system update maintenance. IBM Foundation Pattern must be updated before any other patterns can be updated. This task includes information to apply the fix pack to the web applications pattern, database applications pattern, and foundation pattern.

  1. Access IBM Web Application Pattern on PureSystems Centre

  2. Download the latest versions of the IBM Web Application Pattern, IBM Database Patterns, and IBM Foundation Pattern *.tgz files to the /tmp directory on a Linux or UNIX system.

  3. Log in to the console at http://applianceIP.

  4. Download the command line tool. You can download the command line tool from the Welcome page. Click Command Line Tool in the upper right corner of the welcome page. Download the command line tool to the /tmp directory.

  5. Extract the command line tool with the unzip command:

    #cd /tmp
    #unzip deployer.cli*.zip
    

  6. Log in to the appliance:

    #cd deployer.cli/bin
    #./deployer -h Appliance_IP -u Admin_ID -p Admin_PW
    
    where

    • Appliance_IP is the IP address/host name of your appliance
    • Admin_ID is the administrator ID of your appliance
    • Admin_PW is the administrator password of your appliance

  7. The command line tool displays the following prompt:

    Welcome to the IBM Workload Deployer CLI. Enter 'help' if you need help getting started.
    >>>
    

  8. Run the following command to update the foundation pattern type if it has not yet been installed. Ensure that you specify the correct location of the foundation-x.x.x.x.tgz file, or the command will fail. Replace x.x.x.x with the appropriate version number.

    1. Type the following command:

      >>>deployer.patterntypes.create("/tmp/foundation-x.x.x.x.tgz")
      
      and press Enter to submit the command. The following information should be displayed:

      {
        "description": "DESCRIPTION",
        "name": "NAME",
        "shortname": "foundation",
        "status": "avail",
        "version": "x.x.x.x"
      }
       
      >>>exit
      

  9. Run the following command to update the web application pattern. Make sure that you have specified the correct location of the webapp-x.x.x.x.tgz file, or the command fails. Replace x.x.x.x with the appropriate version number.

    1. Type the following command:

      >>>deployer.patterntypes.create("/tmp/webapp-x.x.x.x.tgz")
      
      and press Enter to submit the command. The following information should be displayed:

      {
        "description": "DESCRIPTION",
        "name": "NAME",
        "shortname": "webapp",
        "status": "avail",
        "version": "x.x.x.x"
      }
       
      >>>exit
      

  10. Run the following commands to update the database pattern type. Ensure that you have specified the correct location of the dbaas-x.x.x.x.tgz file or the command will fail. Replace x.x.x.x with the appropriate version number.

    1. Type the following command:

      >>>deployer.patterntypes.create("/tmp/dbaas-x.x.x.x.tgz ")
      
      and press Enter to submit the command. The following information should be displayed:

      {
        "description": "DESCRIPTION",
        "name": "NAME",
        "shortname": "dbaas",
        "status": "avail",
        "version": "x.x.x.x"
      }
       
      >>>exit
      

  11. The version of IBM WebSphere eXtreme Scale included with the Web Application Pattern requires a compatibility fix to use the caching service provided in the IBM Foundation pattern type.

You have upgraded Web Application Pattern.

You can verify that the web application pattern is updated by viewing the pattern types in the user interface. Click Cloud > Pattern Types. The pattern types are listed with the new version number.


Apply a software update to a web application

You can apply an IBM WAS software update directly to a virtual application based on the IBM Web Application Pattern. You can add the fix to the virtual application pattern before deployment, or you can apply it to a deployed virtual application instance.

  1. Download the appropriate fix from Fix Central.

  2. Add the fix to the catalog.

    1. To open the workload console, click the Workload Console tab at the top of the Welcome page.
    2. Click Catalog > Emergency Fixes.
    3. From the left panel of the Emergency Fixes window, click the add icon to add an emergency fix to the catalog.
    4. Enter the name of the fix, and optionally enter a description.
    5. Click OK. A new panel displays showing additional fields required for adding a new fix to the catalog.
    6. Click Browse... and select the fix that you downloaded. Click OK.
    7. Click Upload to import the file.
    8. In the "Applicable to" section, click Add more.... Select the WAS plug-ins to which the interim fix applies.
    9. Click Refresh to confirm that the file appears in the list of emergency fixes.

  3. To apply the fix to a new virtual application deployment, edit the virtual application pattern to include the fix.

    1. In Virtual Application Builder, select the Enterprise Application or Web Application component on the canvas. The properties panel for the component is displayed.
    2. In the properties panel, click Select under Interim fixes URL.
    3. Select the fix that you added to the catalog, as well as any other fixes to apply.
    4. By default, the Ignore inapplicable ifix updates check box is selected. When enabled, the interim fix installation exit code is ignored if the selected fixes are already installed, or if the fixes do not apply to the installed operating system or operating system architecture. Disable the check box to always report the exit code when installing the selected fixes.
    5. Save your changes.

    During deployment of the virtual application, the fix is applied.

  4. To apply the fix to an existing deployment, update the virtual application instance.

    1. To open the workload console, click the Workload Console tab at the top of the Welcome page.
    2. Click Instances > Virtual Application Instances.
    3. Select the instance to update and then click Manage on the toolbar. The Virtual Application Console opens in a new window.
    4. Click the Operation tab.
    5. Click the WAS component.
    6. In the Fundamental section, expand Update configuration and locate the Interim Fixes option.
    7. Click Select next to Interim Fixes.
    8. Select the fix that you added to the catalog, as well as any other fixes to apply.
    9. Click Submit.
    10. Check the progress of the update in the Operation Execution Results section to verify that the fix was applied successfully.

You have applied the software update.


View the Web Application Pattern type

The IBM Web Application Pattern is a pattern type shipped with IBM PureApplication System W1500. You can view information about the pattern type in the user interface by following these steps:

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Cloud > Pattern Types. The Pattern Types pane displays and the pattern types are listed.

  3. Select Web Application Pattern Type. The pattern details display on the right.

  4. View the details of the pattern type, including:

    • Name: Name of the pattern type.
    • Description: Description of the pattern type.
    • License agreement: Specifies if the license agreement is accepted.
    • Status: Specifies the status of the pattern type: Disabled or Available. To enable the pattern type, select Enable. After you enable the pattern type, the status is changed to Available.
    • Belongings: Specifies the plug-ins that are associated with this pattern type. Click show me all plug-ins in this pattern type and the System Plug-ins pane displays.

You have viewed the web application pattern type details.


View the plug-ins in the Web Application Pattern type

You can view the plug-ins included in the IBM Web Application Pattern. You can view information about the pattern type in the user interface by following these steps:

  1. Click Cloud > Pattern Types. The Pattern Types pane displays and the pattern types are listed.
  2. Select Web Application Pattern Type. The pattern details display on the right.
  3. Click show me all plug-ins in this pattern type and the System Plug-ins palette displays. The plug-ins are listed in the System plug-ins palette.

You have viewed the system plug-ins that are contained in the web application pattern type.




Work with deployed Web applications

After you have deployed a virtual application pattern, you can monitor and administer the deployed application directly from the Virtual Application Console.


Update user passwords

From the Virtual Application Console, you can update passwords for a virtual application instance. Your application must be deployed and all of the virtual machines started before you can change configuration. You can change the following passwords for deployed components of the IBM Web Application Pattern.

DB2

The application user and DBA.

IBM WAS

The administrator user password.

IBM Tivoli Directory Server

The password for the IBM Tivoli Directory Server instance and admin DN.

Note: For a new IBM Tivoli Directory Server deployment using the Web Application Pattern, the following user accounts are used:
Password | Can be updated from Virtual Application Console
For the instance: idsldap | Yes
For the admin DN: cn=root | Yes
For the Web Administration Tool console administrator: superadmin | No. You must use the Tivoli Directory Server Web Administration Tool.

The first time that you log on to the Web Administration Tool, you must log on as the Web Administration Tool administrator using the user ID superadmin and the password secret.

For more information about using the Web Administration Tool, see the documentation in the Tivoli Directory Server information center.

  1. Click Instances > Virtual Application Instances. The Virtual Application Instances pane displays.
  2. Select a virtual application instance. The virtual application instance details display to the right.
  3. Click Manage on the toolbar.
  4. Click Operation. The list of components you can work with is displayed.
  5. Click the component you want to work with: DB2, WAS, or TDS.
  6. Expand the Update configuration section.
  7. Enter the new password and click Submit.

The password is changed.


Back up databases manually

You can create a full online backup of a deployed database to the IBM Tivoli Storage Manager (TSM) server.

To use TSM for creating backup images of a database, you must configure TSM prior to deploying that database. To create backup images of databases deployed before TSM was configured, contact the Operations group.

When TSM is configured, the backup scheduler automatically performs a daily database backup. However, you can supplement the automated database backup feature by performing a manual backup operation. Manually created backup images are never overwritten.

The time stamp is a 14-character representation of the date and time when you performed the backup operation. The time stamp is in the format yyyymmddhhnnss, where: yyyy represents the year, mm represents the month (01 to 12), dd represents the day of the month (01 to 31), hh represents the hour (00 to 23), nn represents the minutes (00 to 59), ss represents the seconds (00 to 59).
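
Given that fixed layout, a time stamp can be parsed with the standard datetime module, for example when sorting or reporting on backup images. A short Python sketch:

    # Sketch: parse a 14-character backup time stamp (yyyymmddhhnnss).
    from datetime import datetime

    stamp = "20110518092615"    # example value from a backup image
    taken = datetime.strptime(stamp, "%Y%m%d%H%M%S")
    print taken                 # 2011-05-18 09:26:15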

To manually create a backup image of a database:

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  2. Click Instances > Virtual Application Instances. The Virtual Application Instances pane displays.
  3. Select a virtual application instance. The virtual application instance details display to the right.
  4. Click Manage on the toolbar.
  5. Click Operation. The list of components you can work with is displayed.
  6. Select the DB2 component.
  7. Expand Create a database image in the panel on the right.
  8. Specify a unique name for Image Name.
  9. Optional: Specify identifying details in Image Description.
  10. Click Submit. If you have not yet configured TSM, the process will fail at this point.
  11. Click Refresh above the operation list at the bottom of the page to update the status of this operation as it processes. Completed backups display "Success" in the Result field. Failed backups display an error code.
  12. Expand List all database images to view current backup images.

The Operations group can manually restore a database on a virtual machine.


Monitor the Web Application Pattern components

This topic discusses the middleware monitoring metrics added by specific components. Your applications must be deployed and all of the virtual machines started before you can monitor results.

Use the user interface to monitor middleware statistics for your deployed virtual machines:

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  2. Click Instances > Virtual Application Instances. The Virtual Application Instances pane displays.
  3. Select a virtual application instance. The virtual application instance details display to the right.
  4. Click Manage on the toolbar.
  5. Click Monitoring > Middleware. The Role Monitoring pane displays the WAS WebApplications Request Count, WAS WebApplications Service Time (ms), and WAS TransactionManager charts.

You have viewed and monitored the middleware components included in a virtual application instance.


View virtual application instance logs

You can view logs of the virtual application instances. Your virtual application patterns must be deployed and all of the virtual machines started before you can monitor results.

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Instances > Virtual Application Instances. The Virtual Application Instances pane displays with the virtual application instances listed.

  3. Select a virtual application instance. The details display to the right.

  4. Click Log, located under the VM Status column, to view the virtual machine status logs. The Log Viewer pane displays organized log sections, such as the operating system log, the pattern type plug-in log, and the agent log.

    The following types of logs can be viewed in the Log Viewer:

    • LDAP logs:

      • /home/idsldap/sqllib/log
      • db2dump
      • db2dump/events
      • db2dump/stmmlog files
      • /home/idsldap/idsslapd-idsldap/etc and logs
      • /var/idsldap/V6.3 log files

    • WAS logs:

      • logs/server1 files
      • logs/ffdc files

  5. You can also view the log files in the Virtual Application Console. After you select the virtual application instance and the details display to the right, you can click Manage on the toolbar. Select the virtual machine for which you want to view logs. Click Logging on the dashboard.

  6. Expand each section to view the logs.

  7. Optional: Download the log file. After you expand the log type and select a log, you can click the green arrow to download the log. With some versions of the Internet Explorer Web browser, you might receive the following error message: "Unable to download. Internet Explorer was unable to open this site. The requested site is either unavailable or cannot be found. Please try again later." This is a Web browser limitation. A workaround for several versions of the Internet Explorer Web browser is provided on the Microsoft support site:

    http://support.microsoft.com/kb/323308

You have viewed the logs associated with the virtual application instances and the virtual machines that they run on.


Troubleshoot and support for the Web Application Pattern

IBM PureApplication System W1500 provides two debug options for pattern types: Secure Shell (SSH) and log viewer.

You must be assigned the Workload resources administration role with full permissions to perform the troubleshooting steps.


Web Application Pattern extra troubleshooting capabilities:

IBM Web Application Pattern includes a logging policy, which enables IBM WAS startup tracing and notifies the log viewer about the generated trace log. If this policy is not applied before deployment, the web application pattern does not provide any extra debug capabilities. For more information about the logging policy, see the topic Log policy.


Virtual machine recovery rules

  • Persistent virtual machines, such as IBM DB2, IBM Tivoli Directory Server, shared proxy, and caching
  • Non-persistent virtual machines (WAS)


WAS failover

After the virtual application is in a RUNNING status, the system checks every 5 minutes to determine whether the WAS process is alive.

If the WAS process stops unintentionally, meaning that it was not stopped by a user, the system restarts the WAS process automatically. If this restart also fails, the Middleware status for the WAS process is set to Recoverable Failure in the Middleware section of the virtual application instance page.

If this situation occurs, examine the log files to determine the cause of the failure. You can then restart the WAS process manually from the Operations tab of the Virtual Application Console.

To access this console, click Manage in the toolbar on the virtual application instance page.
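
The recovery behavior that is described above can be pictured as a simple supervision loop. The following Java sketch is purely conceptual and is not the product's implementation; isProcessAlive and restartProcess are hypothetical stand-ins for the system's internal checks.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WasSupervisor {
    // Hypothetical probes standing in for the system's internal checks.
    static boolean isProcessAlive() { return true; }
    static boolean restartProcess() { return true; }

    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // After the application reaches RUNNING, probe the WAS process every 5 minutes.
        timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
                if (!isProcessAlive() && !restartProcess()) {
                    // The automatic restart also failed: surface Recoverable Failure.
                    System.out.println("Middleware status: Recoverable Failure");
                }
            }
        }, 5, 5, TimeUnit.MINUTES);
    }
}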


WAS role troubleshooting options

To configure troubleshooting options:

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  2. Click Instances > Virtual Applications.
  3. Select a virtual application instance. The virtual application instance details display to the right.
  4. Click Manage on the toolbar. The Virtual Application Console is displayed in a new window.
  5. Click Operation. The list of components you can work with is displayed.
  6. Select the WAS component.
  7. Configure the appropriate options:

    • Set WAS trace level dynamically: This operation sets the trace level of WAS, for example, *=INFO:com.ibm.websphere.*=FINEST. You can use trace logs to help you monitor system performance and diagnose problems. This setting is dynamic and does not restart the server. Use the update configuration operation if you need a server startup trace.
    • Generate javacore: This operation generates two Java core dumps of the WAS. The dumps are generated one minute apart to aid in troubleshooting.
    • Generate a Heap Dump for memory analysis: Generate a Heap Dump for analysis of Out of Memory or other memory related conditions.
    • Generate a System Dump for detailed process analysis: Generate a System Dump for analysis of the process.
    • Get logs: Retrieve the logs from the server.
    • Install WAS Updates: Select and submit the URL to use for obtaining interim fixes for WAS.


Troubleshoot database connection issues

This topic describes how to recover from a database connection problem for a Web application based on the IBM Web Application Pattern pattern type.

Database connections to virtual applications made through the wasdb plug-in are tested when the WAS role status for IBM WAS transitions from CONFIGURING to RUNNING.

If the test of these connections is successful, the WAS role changes to RUNNING status, and the virtual application is operational. If a connection test fails, the WAS role changes to FAILED status. The connection failure can be due to invalid configuration values for the connection, such as an incorrect port number, or it can be due to other factors, such as a transient network connectivity failure to the database.
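
Conceptually, the connection test resembles the following JDBC probe; the URL and credentials are hypothetical, a DB2 JDBC driver would need to be on the class path, and the actual wasdb plug-in logic is internal to the product.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DbConnectionProbe {
    public static void main(String[] args) {
        // Hypothetical connection values; an incorrect port here would produce
        // the kind of configuration failure described above.
        String url = "jdbc:db2://192.0.2.10:50000/MYDB";
        try (Connection c = DriverManager.getConnection(url, "appuser", "password")) {
            System.out.println("Connection test passed: the WAS role can move to RUNNING");
        } catch (SQLException e) {
            System.out.println("Connection test failed: the WAS role moves to FAILED - " + e.getMessage());
        }
    }
}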

Perform the following steps to troubleshoot the issue:

  1. Check the trace logs to identify the source of the failure for the WAS role.
  2. If the cause of the problem is a configuration error, you can change the configuration in the user interface. If the cause of the problem is a connectivity issue, proceed to the next step.
  3. Restart the entire virtual application or restart all WAS nodes.

    1. Click Instances > Virtual Applications.
    2. Select the virtual application instance.
    3. To restart the entire virtual application instance, click Stop on the toolbar to stop the instance. When the instance is stopped, click Start to restart the instance.
    4. To restart WAS nodes only, click Maintain on the toolbar to put the instance in maintenance mode. For each WAS node listed in the Virtual Machines section of the detail pane, click Actions > Stop to stop the node, and then Actions > Start to restart the node.

      Note: You must use the stop and start commands from the Action menu to restart the node. The commands to restart WAS on the Operation tab in the Virtual Application Console do not enable you to recover from the problem described in this topic.


The Application Pattern Type for Java

The Application Pattern Type for Java is a virtual application pattern type that you can use to build Java applications. This pattern type provides an easy and fast mechanism for provisioning Java applications.

The Application Pattern Type for Java manages Java application deployment and lifecycle. This product extension sits on top of the IBM PureApplication System. Plug-in APIs run within the virtual application pattern to support models, patterns, binary files, and automation. A collection of additional components allows connections to network resources, such as databases or web services, to deploy additional files and to enable monitoring of log files. After building the virtual application in the Virtual Application Builder, you can deploy the application, and the system determines the underlying topology configuration.

The Application Pattern Type for Java provides an instance of IBM 64-bit SDK Java Technology Edition Version 7. By using the pattern, you can bundle an existing Java application with all of the resources it requires as a compressed archive file, and deploy it into the cloud. In addition to providing the Java runtime environment, the Application Pattern Type for Java provides some pre-configuration, which implements best practices for performance and monitoring.


Get started with the Application Pattern Type for Java

The Application Pattern Type for Java is a standardized application-centric pattern solution designed to enable the deployment and management of resources in a cloud environment. The Application Pattern Type for Java is delivered with IBM PureApplication System, but you must first accept the license agreement.

You use the Virtual Application Builder to create a virtual application template, or you can use an existing virtual application template. You then use the template to build the virtual application pattern. When you select a template and start building, the Virtual Application Builder opens where you can customize the virtual application pattern with component parts and policies. You can save and clone virtual application patterns to build new customized patterns. When your pattern is complete, you can deploy it into the cloud as a virtual application instance.

The following steps describe how to create a virtual application pattern with the Application Pattern Type for Java, then deploy the virtual application pattern to your cloud environment. Use the related information links at the end of the topic for more detailed information about completing these steps by using the user interface. You can also do most of these tasks with the command-line interface or the REST API.

  1. Create a virtual application template that uses the Application Pattern Type for Java. You can also work with an existing virtual application template.
  2. Create a virtual application pattern from the template. The use of patterns is optional; instead of creating a pattern, you can deploy a virtual application directly from a template, specifying any required settings at deployment time.
  3. If you are using a virtual application pattern, edit it to suit your requirements. Choose from several different virtual application parts, including Java applications, additional files, log monitoring, and network connections. The Application Pattern Type for Java provides the correct topology for the environment that you want these parts and artifacts to serve.
  4. Optional: Create virtual application pattern layers to group application components together.
  5. Deploy the virtual application pattern. After deployment, the virtual application pattern becomes a virtual application instance.
  6. Monitor the Application Pattern Type for Java components in the virtual application instance.
  7. View virtual application instance logs.
  8. Troubleshoot the virtual application.


What's new for the Application Pattern Type for Java

The following updates and changes are included in this release.


Application Pattern Type for Java prerequisites

Before you use the Application Pattern Type for Java, verify that your hardware and software meet the minimum requirements.

The official set of hardware and software requirements is available on the System Requirements page of the product support site. If there is a conflict between the information provided in the information center and the information on the System Requirements page, the information on the System Requirements page takes precedence.


Software requirements


Application Pattern Type for Java restrictions and limitations

Known restrictions and limitations might affect the virtual application instance. Currently, the only limitation is the use of the IBM_JAVA_OPTIONS environment variable.

The IBM_JAVA_OPTIONS environment variable is used as part of the management of the Java application lifecycle. If you use this variable in your scripts, you must therefore preserve its existing contents. Add your required options to the existing variable. For example, use the following command to add new options opt1 and opt2:

export IBM_JAVA_OPTIONS="opt1 opt2 $IBM_JAVA_OPTIONS"
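
To confirm from inside your application that the variable still carries the management options after your scripts run, you can simply print it. This trivial sketch is illustrative only, not a required step.

public class CheckJavaOptions {
    public static void main(String[] args) {
        // If your scripts appended (as shown above) rather than replaced the value,
        // the original lifecycle-management options are still present here.
        System.out.println("IBM_JAVA_OPTIONS=" + System.getenv("IBM_JAVA_OPTIONS"));
    }
}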


Monitor Java applications

While your application is running, you can monitor its performance and use of resources. For example, you can view the heap usage, the number of loaded classes, or garbage collection activity. The information is displayed as graphs, in the following categories:

Class loading

  • Number of classes loaded

Copy garbage collector

  • Number of compactions and collections
  • Time spent in collection
  • Memory usage

Mark sweep compact garbage collector

  • Number of compactions and collections
  • Time spent in collection
  • Memory usage

Memory

  • Heap memory usage
  • Non-heap memory usage
  • Heap size
  • Shared class cache size
  • Physical memory usage

Operating system

  • Memory usage
  • Process CPU time

Threading

  • Thread count

Some of these categories, such as thread count, apply to the application. Other categories, such as heap usage, apply to the JVM.
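
Similar metrics are exposed inside the JVM by the standard java.lang.management API. The following sketch reads a few of them directly; it is a generic illustration and is not part of the monitoring view itself.

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmMetrics {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("Classes loaded: " + cl.getLoadedClassCount());
        System.out.println("Heap used (bytes): " + mem.getHeapMemoryUsage().getUsed());
        System.out.println("Non-heap used (bytes): " + mem.getNonHeapMemoryUsage().getUsed());
        System.out.println("Thread count: " + threads.getThreadCount());
    }
}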

  1. Click Instances > Virtual Applications to show the list of deployed applications, then click the application to monitor.
  2. Click Manage on the toolbar to open the virtual application console.
  3. In the virtual application console, click Monitoring > Middleware to display the graphs.

If you want more detailed information, you can monitor your application by using a separate tool, IBM Monitoring and Diagnostic Tools for Java - Health Center.


Monitor your Java application with Health Center

In addition to the monitoring of the virtual machine resources that is provided by IBM PureApplication System, you can also monitor your Java virtual application by using IBM Monitoring and Diagnostic Tools for Java - Health Center.

You can use Health Center to monitor various aspects of your deployed Java application. For example:

Health Center also offers you some limited control of the application:

  1. If you enabled the Health Center agent by adding a JVM Policy to the virtual application, go to the final step. Otherwise, click Instances > Virtual Applications. The Virtual Application Instances palette opens.

  2. Select a virtual application instance. The virtual application instance details are displayed.

  3. Click Manage on the toolbar.

  4. Click Operation. The Operations palette opens.

  5. Select the Java row and expand the Attach Health Center entry.

  6. Specify a number for the Health Center port. The Health Center agent uses this port to listen for remote connections. The default value is 1972.

    Note: This value specifies the first port that the Health Center agent attempts to use. If the agent cannot use the port, it increments the port number and tries again. For more information, see the Health Center documentation in the IBM Monitoring and Diagnostic Tools for Java information center.

  7. Optional: In the Health Center Client (IP or IP/netmask) field, specify the Health Center client address that you will be monitoring from. This setting is used to restrict source access to the Health Center agent port. The value is an IP address, for example, 1.2.3.4, or a combination of an IP address and netmask, for example 1.2.0.0/255.255.0.0. This example combination of IP address and netmask matches anything in the 1.2. network.

  8. Click Submit to attach the Health Center agent to the application.

  9. Start Health Center in the IBM Support Assistant to monitor the application.

See the IBM Monitoring and Diagnostic Tools for Java information center for more information about using Health Center to monitor Java applications.
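
The port-selection behavior that is described in the note can be illustrated with a small sketch: try the configured port, and increment on failure. This is a conceptual model of the agent's behavior, not its actual code.

import java.io.IOException;
import java.net.ServerSocket;

public class PortProbe {
    // Return the first free port at or above the requested one, mirroring
    // the "increment and try again" behavior described in the note.
    static int firstFreePort(int start) {
        for (int port = start; port <= 65535; port++) {
            try (ServerSocket s = new ServerSocket(port)) {
                return port;
            } catch (IOException busy) {
                // Port in use; try the next one.
            }
        }
        throw new IllegalStateException("No free port found");
    }

    public static void main(String[] args) {
        System.out.println("Agent would listen on port " + firstFreePort(1972));
    }
}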



Virtual application instance logs for the Application Pattern Type for Java

You can view logs of the virtual application instances and the virtual machines that they run on.

New log entries are appended to the Log Viewer when they occur. You can view the following types of logs in the Log Viewer:

See the related information for instructions for using the Log Viewer.


Restart your Java application

If your application completes, either by returning from the main method or by a call to the System.exit(0) method, then the JAVA middleware status is stopped, but the virtual machine (VM) is still running. To restart the application, you must first stop the VM.
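
For example, a trivial application like the following sketch reaches the stopped middleware status as soon as its main method returns:

public class OneShotJob {
    public static void main(String[] args) {
        System.out.println("work done");
        // Returning from main (or calling System.exit(0)) ends the application;
        // the JAVA middleware status becomes stopped while the VM keeps running.
    }
}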

  1. Click Instances > Virtual Applications.

  2. Click your application instance to show the application details. If the application completed successfully, the application status is stopped. The JAVA middleware status in the Virtual machine perspective section is also stopped, but the VM status is running.

  3. On the toolbar, click the Maintain icon to put the application into maintenance mode.

  4. In the Actions section, click the View link, then click Stop. Wait for the VM to reach the stopped state.

  5. Click the View link again, then click Restart. The VM and the application are both restarted.

    Note: The VM is reused from the previous session. If the application is a web application, the IP address is unchanged. Any files that the application created or modified still exist; if you want any cleanup to be done automatically when the application finishes or restarts, that cleanup must be done by the application.

  6. On the toolbar, click the Resume icon to exit maintenance mode.


Use Java application templates

The Application Pattern Type for Java provides some templates for deploying common applications. Use these templates to quickly deploy the following applications: Apache Derby, Apache JMeter, and Apache Tomcat. Download the application to deploy. The templates are set for specific versions of these applications; you can download different versions, but you must then modify some of the template information.

The templates provide information for you so that you can deploy the application as quickly as possible. You must specify some information before you can deploy the application. For example, the Apache Tomcat template already contains the port that Tomcat listens on, the log directory that Tomcat uses, and the directory that Tomcat automatically retrieves web applications from. You must specify the web application to be hosted by Tomcat, and the Tomcat file that you downloaded earlier.

  1. Click Catalog > Virtual application templates, then select Application Pattern Type for Java 1.0.

  2. Click the template to deploy.

  3. Click Open on the toolbar. The template is opened in the virtual application builder.

  4. Modify the template as required, then save the template. Warning icons indicate properties that are currently missing, and require your input. The following table shows the information that you must add for each template:


    Information required by Java application templates

    Template application Required information
    Apache Derby

    Apache Derby (Java application) > Archive file

    The Apache Derby application file that you downloaded earlier
    Apache JMeter

    Apache JMeter (Java application) > Archive file

    The Apache JMeter application file that you downloaded earlier
    Apache Tomcat

    Apache Tomcat server (Java application) > Archive file

    The Apache Tomcat application file that you downloaded earlier

    Web Application (Additional archive file) > Additional archive file

    The web application to be hosted by Tomcat
    If you chose to download a different version of the Apache application, you must also modify the following template information, because this information contains the version number of the application:


    Information that contains a version number

    Template application Information that contains a version number
    Apache Derby

    Apache Derby (Java application) > Command Line

    The command used to start the application
    Apache JMeter

    Apache JMeter (Java application) > Command Line

    The command used to start the application
    Apache Tomcat

    Apache Tomcat server (Java application) > Command Line

    The command used to start the application

    Web Application (Additional archive file) > Deploy path

    The directory that Tomcat retrieves web applications from

    Apache Tomcat logs (Monitored file) > File

    The full path of the Tomcat log files

  5. In the IBM PureApplication System window, click Deploy to deploy the application template.


Application Pattern Type for Java virtual application components and policy

A virtual application pattern contains components and, optionally, one or more policies. A component represents a service that is required by the virtual application instance. A policy defines how a service is configured during deployment.

Components and policies are defined by plug-ins. Each pattern type has a set of included plug-ins, which determines the components that you can use when you build a pattern by using that pattern type.


Components

Typically, an application contains artifacts, such as a Java application, additional files, and network connections. You build the virtual application pattern by using components that represent these artifacts. When you deploy the pattern, the system creates instances of services based on the components. The components that you can choose from in the Application Pattern Type for Java are described in the following sections.


Policy

You can attach an optional policy, called "JVM Policy", to the virtual application. Policies configure specific behavior, or define a quality of service level, for the deployed application instance. Use the policy for the Application Pattern Type for Java to control the features and the configuration of the underlying Java virtual machine.


Application components

The following application components are available when you build a virtual application by using the Application Pattern Type for Java: "Java application" and "Additional archive file".


Java application (IBM Java Runtime Version 7)

The Java application component represents an execution service for the Java SE platform. You can use this component to deploy any application that requires a Java runtime environment.

This component has the following attributes:

Name

Name of the Java application.

Archive file

Archive file that contains the Java application to be uploaded. Supported archive file types are .zip, .tar.gz, and .tgz. Required.

Application launch type

Specifies how the Java application is launched. Required. Select from the following options:

Main Class

The Java application is launched by invoking the main() method of the main class. If you choose this option, the following attributes also apply:

Main Class

Class that contains the main() method, used to launch the Java application. This attribute is the fully qualified main class name, for example, com.mycom.myapp.Launcher. Required when you choose the Main Class launch type.

Program Arguments

Specifies one or more arguments to be passed to the main class.

Classpath

Specifies one or more additional entries to add to the Java class path. Use a colon (:) to separate class path entries.

Command Line

The Java application is launched from the command line. This option is useful if the Java application contains a launch script or a custom launcher. If you choose this option, the following attribute also applies:

Command Line

Command, and options, to run to launch the Java application.
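
As an illustration of the Main Class launch type, assume a hypothetical archive that contains the following class. The Main Class attribute would then be com.example.Launcher, and an equivalent Command Line launch would invoke the same class explicitly, for example java -cp app.jar com.example.Launcher (the file name here is hypothetical).

package com.example;

public class Launcher {
    // Entry point named by the Main Class attribute (com.example.Launcher).
    // Values from Program Arguments arrive here as args; Classpath entries
    // (colon-separated) are added to the class path before launch.
    public static void main(String[] args) {
        System.out.println("Launched with " + args.length + " argument(s)");
    }
}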

The available policy for this component is the JVM policy, described in "Policies". The available incoming connectable component for this component is "Connect in". The available outgoing connectable components are "Additional archive file" and "Connect out".

Use the property panel to upload the archive file. The archive file should contain the entire Java application and all the resources that the application requires. You do not need to include a Java runtime environment because IBM 64-bit SDK Java Technology Edition Version 7, which is included in the Application Pattern Type for Java, already contains one.

You can specify extra archive files using the "Additional archive file" component.


Additional archive file

You can upload other archive files in addition to your Java application archive file. You can use these archive files to deploy additional resources, such as JDBC drivers or .war files, into an application server, or to overwrite parts of the deployed Java application, such as configuration files.

This component has the following required attributes:

Deploy path

Location that the additional archive file is deployed to.

Type of archive file

Type of the archive file. The options are as follows:

  • ZIP, TAR.GZ, or TGZ file
  • Java archive file (.jar, .ear, or .war files)

Additional archive file

External archive file that contains additional files needed by the Java application component.

Unzip file

Specifies whether the file is extracted. This attribute is available only if you select ZIP, TAR.GZ, or TGZ file as the type of archive file.

The incoming connectable component is "Java application (IBM Java Runtime Version 7)".


Other components

The following other components are available when you build a virtual application by using the Application Pattern Type for Java: "Connect in", "Connect out", and "Monitored file".


"Connect in"

This component is used to open the firewall for inbound TCP connections to a specified port.

This component has the following attributes:

Port

Port on the server that connections can be made to. Required.

Limit range of connecting addresses

Specifies whether there are restrictions on the range of IP addresses that can make connections to the port. Required.

Connect server(s) (IP or IP/netmask)

Server IP or IP netmask for IP addresses that are allowed to make connections to the port.

There are no incoming connectable components. The outgoing connectable component is "Java application (IBM Java Runtime Version 7)".

Connect a "Connect in" component to an application component configures the firewall to allow network connections to be made to the application, on the port specified.


"Connect out"

This component is used to open the firewall for outbound TCP connections from a Java application to a specified host and port.

This component has the following attributes:

Server (IP or IP/netmask)

Target server. Required.

Port

Destination port on the target server. Required.

The incoming connectable component is "Java application (IBM Java Runtime Version 7)". There are no outgoing connectable components.


Monitored file

Use the monitored file component to specify a file, or a collection of files, to monitor and make available in the logging view.

This component has the following required attribute:

File

Path and file name to monitor in the logging view. Use the asterisk (*) wildcard to select multiple files.
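
The wildcard behaves like an ordinary file glob. The following sketch shows the matching semantics by using a hypothetical log directory; the monitored-file support itself is provided by the pattern.

import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class WildcardDemo {
    public static void main(String[] args) {
        // Hypothetical File attribute value: /opt/app/logs/*.log
        PathMatcher m = FileSystems.getDefault().getPathMatcher("glob:/opt/app/logs/*.log");
        System.out.println(m.matches(Paths.get("/opt/app/logs/server.log"))); // true
        System.out.println(m.matches(Paths.get("/opt/app/logs/server.txt"))); // false
    }
}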

The incoming connectable component is "Java application (IBM Java Runtime Version 7)". There are no outgoing connectable components.


Policies

The JVM policy is available when you build a virtual application by using the Application Pattern Type for Java.


JVM policy

A Java virtual machine (JVM) policy controls the features and the configuration of the underlying JVM. Attach the JVM policy to configure the Java runtime environment. For example, you can enable IBM Monitoring and Diagnostic Tools for Java - Health Center, for monitoring the application, and you can debug the application by using an integrated development environment (IDE) such as IBM Rational Application Developer.

The JVM policy has the following attributes:

Set minimum and maximum JVM heap size (in MB)

Minimum and maximum heap size of the JVM, in megabytes (MB).

Enable debug

Specifies whether the JVM is in debug mode.

Start debug suspended

Specifies whether the JVM is started in suspended mode. The default value is false.

Debug port

Port that the JVM uses to listen for remote connections. The default value is 7777. If you select Enable debug but do not specify a value for the debug port, the default value is used.

Client (IP or IP/netmask)

Specifies an optional address of the debug client. This setting is used to restrict source access to the debug port. The value is an IP address, for example 1.2.3.4, or a combination of an IP address and netmask, for example 1.2.0.0/255.255.0.0. This example combination of IP address and netmask matches anything in the 1.2. network.

Enable Health Center

Specifies whether the JVM is started with the Health Center monitoring agent enabled. The default value is true.

Health Center port

Port that the Health Center agent uses to listen for remote connections. The default value is 1972. If you select Enable Health Center but do not specify a value for the Health Center port, the default value is used.

Note: This value specifies the first port that the Health Center agent attempts to use. If the agent cannot use the port, it increments the port number and tries again. For more information, see the Health Center documentation in the IBM Monitoring and Diagnostic Tools for Java information center.

Health Center Client (IP or IP/netmask)

Specifies an optional address of the Health Center client. This setting is used to restrict source access to the Health Center agent port. The value is an IP address, for example, 1.2.3.4, or a combination of an IP address and netmask, for example 1.2.0.0/255.255.0.0. This example combination of IP address and netmask matches anything in the 1.2. network.

Generic JVM arguments

Any additional JVM configuration arguments.

The applicable component, to which you can attach this policy, is "Java application (IBM Java Runtime Version 7)".

When you enable debugging, the JVM is started in debug mode, and is listening on the specified port. A debug program on any client machine can attach to the JVM by default. You can specify a client IP address, or a combination of IP address and netmask, to restrict access to the JVM. A client IP address, such as 10.2.3.5, allows a specific client machine to debug. An IP address and netmask combination, such as 10.2.3.5/255.255.0.0, allows any machine on the specified network, 10.2 in this example, to attach to the JVM.
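
The matching rule can be made concrete with a short sketch; ipToInt and the sample addresses are illustrative only. (On a standard JVM, debug mode typically corresponds to JDWP options such as -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777, which the policy presumably manages for you.)

public class NetmaskCheck {
    // Convert a dotted-quad IPv4 address to a 32-bit integer.
    static int ipToInt(String ip) {
        String[] p = ip.split("\\.");
        return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
             | (Integer.parseInt(p[2]) << 8) | Integer.parseInt(p[3]);
    }

    public static void main(String[] args) {
        int allowed = ipToInt("10.2.3.5");
        int mask = ipToInt("255.255.0.0");
        int client = ipToInt("10.2.99.1"); // any machine on the 10.2 network
        // The client may attach if its masked bits equal the allowed address's masked bits.
        System.out.println((client & mask) == (allowed & mask)); // true
    }
}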


Manage virtual application patterns

To create or extend virtual application patterns, you must have the Create new patterns permission or the Appliance administration role with full permissions. You must also accept the license agreements for the pattern types that you use.

Select components, links and policies to create or extend the virtual application in the Virtual Application Builder. You can start with a virtual application template and customize it, clone an existing virtual application pattern and customize it, or create a new virtual application pattern. The virtual applications deployed to the cloud become virtual application instances.


Create virtual application patterns

Create virtual application patterns to model virtual applications that you can deploy to the cloud. You must be granted access to patterns, access to create patterns, or have the Workload resources administration role with full permissions to complete this task.

When you create a virtual application, you first select a pattern type such as the IBM Web Application Pattern. The pattern type abstracts the infrastructure and middleware layers for a particular type of workload, such as a web application. You can then create a virtual application pattern by using the components that are associated with the selected pattern type.

Use the Virtual Application Builder to define, create, and deploy the virtual applications. For example, rather than installing, configuring, and creating a connection to a specific instance of a database, you can specify the need for a database and provide the associated database schema in the virtual application pattern. The database instance and the connection in the cloud is then created for you by the virtual application pattern.

Note: When you start the Virtual Application Builder, it opens in a separate window. If you are using Microsoft Internet Explorer V7 or V8, you might need to change your browser settings so that the Virtual Application Builder is displayed correctly. Specifically, configure pop-up windows to open in a new tab in the current window: from the browser menu, click Tools > Internet Options.

You can use the workload console, the command line interface, or the REST API to complete this task. For the command line and REST API information, see the Related information section.

  1. Click Patterns > Virtual Applications.

  2. Click the New icon on the toolbar.

  3. To build the virtual application pattern:

    1. Select a pattern type from the menu.
    2. Select a virtual application template.
    3. Click Start Building. A new virtual application pattern that is associated with the pattern type is created. The Virtual Application Builder opens in a new window where you can add components and policies.

  4. On the Virtual Application properties pane, specify the following information:

    Name

    The name of the virtual application pattern.

    Description

    The description of the virtual application pattern. This field is optional.

    Type

    Leave Application selected to create a virtual application pattern or select Template to create a virtual application template used as the basis for creating other virtual application patterns.

    Lock option for plugin usage

    Specify how this virtual application pattern is affected by upgrades to the pattern type or to IBM Foundation Pattern.

    Unlock plugins

    If the pattern type is upgraded, use the latest versions of pattern type plug-ins. If IBM Foundation Pattern is upgraded, use the latest version.

    Lock all plugins

    Do not change the version of plug-ins or the version of the IBM Foundation Pattern associated with this virtual application pattern when an upgrade occurs.

    Lock all plugins except Foundation plugins

    If the pattern type is upgraded, do not change the version of the plug-ins that are associated with this virtual application pattern. If IBM Foundation Pattern is upgraded, use the latest version.

    Note: If you select Lock all plugins or Lock all plugins except Foundation plugins, you can view a list of which plug-ins are locked. Click the Source tab in Virtual Application Builder. The application model source is displayed. Search for the element plugins to view the list.

  5. If you selected a blank template, the canvas is empty and you can start building the virtual application. If you selected a template, customize the virtual application:

    • Drag the components to add to the virtual application pattern onto the canvas.
    • To add policies to the virtual application pattern, click Add policy for application and select a policy or select a component part on the canvas and click the Add a Component Policy icon to add a component-specific policy.
    • To remove parts, click the Remove Component icon in the component part.
    • To edit the connections between the parts, hover over one of the objects until the blue circle turns orange. Select the circle with the left mouse button, drag a connection to the second object until the object is highlighted, and release the mouse button.

  6. Click Save.


Create virtual application layers

You can use the Virtual Application Builder to create virtual application layers. These layers provide a way for you to control the complexity and to reuse virtual applications.

By default, a virtual application consists of one layer when you first create it. When you use application layering, you can modify an existing virtual application by adding separate layers. For example, you can use one virtual application that defines the basic components of one deployment environment to create a different virtual application for other deployment environments by associating the quality of service (QoS) layer with different QoS goals.

One virtual application can contain multiple layers. A layer can contain component types of the virtual application, or the layer can reference another virtual application, which is called a reference layer.

Because there is no predefined set of layers or binding between a component type and a particular layer, you can create layers according to your business goals. However, one component type in a virtual application can be placed in only one layer, although you can move parts between layers.

Use the Virtual Application Builder to add, delete, edit, disable or enable a layer, move virtual application parts between layers, or import a virtual application as a reference layer.

  1. Click Patterns > Virtual Applications.

  2. Select the virtual application pattern and click Open on the toolbar.

  3. On the Virtual Application Builder pane, expand Layers to view the layers of the virtual application pattern.

  4. Click the Create a new layer icon.

    You can also add a layer, called a reference layer, by importing an existing application. For more information, see the Related tasks section.

  5. Click Save.


Import virtual application patterns

You can import virtual application patterns so that you can edit and deploy the imported application for use in your system.

You must have access to patterns, access to create patterns, or the Workload resources administration role with full permissions to complete this task. You can import virtual application patterns that were previously exported or that were developed with IBM Rational Application Developer.

  1. Click Patterns > Virtual Applications.

  2. Click the Import icon on the toolbar.

  3. Optional: Enter a name for the application in the Name field.

  4. Click Browse to select the application compressed file to import.

    If the file size is larger than 2 GB, use the command-line tool to upload the file.

  5. Review the virtual application pattern to ensure that it works when it is deployed.

    1. Select the imported virtual application pattern and click Open on the toolbar. The virtual application pattern opens.

    2. Check the configuration of all components, links, and policies to ensure that the settings are appropriate. In particular, verify any dependencies on other items. For example:

      • Check for any configured interim fixes that are associated with an IBM WAS component. Remove associated interim fixes that are not required and add interim fixes to include in deployments. Verify that all required interim fixes are available in the list of emergency fixes in the catalog. For more information about interim fixes, see the Related tasks section.
      • Verify other relationships such as reusable components or imported layers.

After you import the virtual application, you can edit the application and deploy it into the system.



Import virtual applications as layers

You can import virtual applications as reference layers of other applications to reuse existing applications.

Use reference layers to reuse existing applications. The contents of a reference layer are read-only. Changes that are made to the referenced application are reflected in the application that references it.

  1. Click Patterns > Virtual Applications.
  2. Select the virtual application pattern and click Open on the toolbar.
  3. On the Virtual Application Builder panel, expand Layers to view the layers of the virtual application pattern.
  4. Click the Import a virtual application icon.
  5. Select a virtual application to reference as a layer and click Add.

After you import the reference layer, you can connect virtual application pattern components in other layers to the reference layer.


Modify virtual application patterns

You can modify a virtual application pattern to add or remove application components, links, and policies.

You must be granted access to the virtual application pattern or have the Workload resources administration role with full permissions to complete this task.

Use the Virtual Application Builder in the workload console to edit the virtual application. You can add, edit, or remove components, links, and policies in virtual applications.

  1. Click Patterns > Virtual Applications.

  2. Select a virtual application pattern and click Open.

  3. Edit the virtual application pattern:

    • Drag the components to add to the virtual application pattern onto the canvas.
    • To add policies to the virtual application pattern, click Add policy for application and select a policy or select a component part on the canvas and click the Add a Component Policy icon to add a component-specific policy.
    • To remove parts, click the Remove Component icon in the component part.
    • To edit the connections between the parts, hover over one of the objects until the blue circle turns orange. Select the circle with the left mouse button, drag a connection to the second object until the object is highlighted, and release the mouse button.

  4. Click Save.


Modify virtual application layers

When you use application layering, you can modify an existing virtual application by adding separate layers. You can modify these virtual application layers as your business needs change.

  1. Click Patterns > Virtual Applications.

  2. Select a virtual application pattern and click the Open icon on the toolbar.

  3. On the Virtual Application Builder, expand Layers to view the layers of the virtual application pattern. Click a layer to view it on the canvas and to start editing.

  4. Modify the layer as needed:

    • Rename the layer.
    • Add or remove virtual application components.
    • Add or remove virtual application component connections.
    • Add or remove policies.
    • Move components between layers. Click the layer for a component to display a list of the layers in the virtual application. Select a layer to move the component into it from its previous layer.

  5. Click Save.


Delete virtual application patterns

You can delete virtual application patterns that you no longer want to deploy.

You must have access to the application or have the Workload resources administration role with full permissions to complete this task. You can use the workload console, the command line interface, or the REST API to complete this task. For the command line and REST API information, see the Related information section.

  1. Click Patterns > Virtual Applications.
  2. Select a virtual application pattern. The virtual application pattern details display in the right pane.
  3. Click Delete on the toolbar.
  4. Click Confirm to delete the pattern.


Delete virtual application pattern layers

You can use the Virtual Application Builder in the workload console to delete virtual application pattern layers.

  1. Click Patterns > Virtual Applications.
  2. Select a virtual application pattern and click Open on the toolbar.
  3. On the Virtual Application Builder pane, expand Layers to view the layers of the virtual application pattern.
  4. Select the layer and click Delete the selected layer on the toolbar.


Configure components and policies

You can add components or policies to a virtual application, or view or edit components or policies. This task is done within the Virtual Application Builder.

The Virtual Application Builder makes it simple to model a virtual application by placing components on a canvas, linking them together, and applying relevant policies to components.

  1. Click Patterns > Virtual Applications.

  2. Select a pattern type and then select a virtual application pattern.

  3. Click Open. On the Virtual Application Builder pane, complete one of the following actions:

    • To edit an existing component or policy, click the component or policy part on the canvas.
    • To add a component, select the component from the list of assets, then drag the component to the canvas.
    • After you add a component to the canvas, add a policy by either clicking Add policy for application on the canvas, or by clicking the Add a Component Policy icon in a component part on the canvas.

      You can apply a policy globally at the application level, or apply it to a specific component that supports the policy. When you apply a policy globally, it is applied to all components in the virtual application pattern that support it. If you apply a policy to a specific component and also apply it to the whole virtual application pattern, the configuration of the component-specific policy overrides the application level policy.

In the Virtual Application Builder, you can make connections between objects on the canvas to define dependencies. For more information, see the documentation for the component that you are editing.

Note: You can also view properties for a component outside of the Virtual Application Builder by looking at the information for the associated plug-in. Click Cloud > System plug-ins and search for the plug-in to view.


Plug-ins associated with Application type components

Component Plug-in
Additional archive file (web application) file
Additional archive file (Java application) javafile
Enterprise application was
Existing Web Service Provider Endpoint webservice
Java application java
Policy Set webservice
web application was


Plug-ins associated with Database type components

Component Plug-in
Data Studio web console dswc
Database db2
Existing database (DB2 or Informix) wasdb2
Existing database (Oracle) wasoracle
Existing IMS database imsdb


Plug-ins associated with Messaging type components

Component Plug-in
Existing Messaging Service (WebSphere MQ) wasmqx
Queue (WebSphere MQ) wasmqq
Topic (WebSphere MQ) wasmqt


Plug-ins associated with OSGi type components

Component Plug-in
Existing OSGi bundle repository osgirepo
OSGi application was


Plug-ins associated with Transaction processing type components

Component Plug-in
Existing CICS Transaction Gateway wasctg
Existing Information Management Systems Transaction Manager imstmra


Plug-ins associated with User registry type components

Component Plug-in
Existing User Registry (IBM Tivoli Directory Server or Microsoft Active Directory) wasldap
User Registry tds


Plug-ins associated with Other type components

Component Plug-in
Connect Out connect
Connect In connect
Monitored file filemonitor


Deploy virtual application patterns

After you create a virtual application, you can provision and deploy it to the cloud. You can deploy a virtual application multiple times; each deployment is a running virtual application instance on the cloud infrastructure. You can use the workload console, the command line interface, or the REST API to complete this task. For the command line and REST API information, see the Related information section.

Ensure that the virtual application pattern is configured with the required settings. After a virtual application pattern is deployed, changes to the virtual application pattern do not modify existing virtual application instances. You must stop a deployed virtual application instance before you can change it.

When a virtual application is deployed, the Virtual Application Builder allocates necessary resources, such as virtual machines and block storage on the cloud infrastructure, and deploys, configures, and starts the virtual application components in the cloud.

Policies that are associated with the virtual application typically influence how cloud infrastructure resources and virtual application pattern components are allocated for a deployment. For example, a single virtual machine that runs a web application is provisioned when the web application component is deployed. However, a scaling policy that is associated with a web application results in multiple virtual machines, equal to the cluster size that you specify for the scaling policy. These virtual machines are provisioned for the web application, an elastic load balancer cloud component used for routing HTTP requests, and a set of WebSphere application components that facilitate session replication across the cluster members of the web application.

The time that it takes to deploy a virtual application depends on several factors, such as the size of the virtual application pattern parts and the interdependencies of parts in the application definition, network usage, storage usage, and the provisioning speed of the virtual machine on the cloud infrastructure.

Note: Connectivity issues with the DNS server can cause increased deployment times or failed deployments. The network administrator for the target network must check the routing tables of the DNS server to ensure that it can resolve the network address of the system.

You can add SSH key-based access to your workload virtual machine when you deploy the virtual application. This type of security provides better protection than password-based access.

Note: The routing policy is automatically applied to a web application when a proxy shared service is running in the same cloud group it is deploying into. Otherwise, the routing policy is not automatically added to the virtual application.

  1. Click Patterns > Virtual Applications.

  2. Select the virtual application pattern to deploy.

  3. Click Deploy on the toolbar.

  4. Specify the settings for the deployment.

    1. Select IPv4 or IPv6 in the Filter by IP type field.
    2. Select the Filter by profile type from the menu.
    3. Select the Profile from the menu.
    4. Select the Priority from the menu.
    5. Select the Cloud group from the menu.
    6. Select the IP group from the menu.

  5. Expand the Advanced section to configure the advanced settings.

    1. The SSH key provides access to the virtual machines in the cloud group for troubleshooting and maintenance purposes. See the topic, Configuring SSH key-based access, for details about SSH key-based access to virtual machines. Use one of the following options to set the public key:

      • To generate a key automatically, click Generate. Click Download to save the private key file to a secure location. The default name is id_rsa.txt.

        The system does not keep a copy of the private key. If you do not download the private key, you cannot access the virtual machine, unless you generate a new key pair. You can also copy and paste the public key into a text file to save the key. Then, you can reuse the same key pair for another deployment. When you have the private key, make sure that it has the correct permissions (chmod 0400 id_rsa.txt). By default, the SSH client does not use a private key file that provides open permission for all users.

      • To use an existing SSH public key, open the public key file in a text editor and copy and paste it into the SSH Key field.

        Do not use cat, less, or more to copy and paste from a command shell. The copy and paste operation adds spaces to the key that prevent you from accessing the virtual machine.

  6. Click OK.

    When the virtual application is deployed, the virtual application instance is listed under the Instances section of the IBM PureApplication System W1500. To view the virtual instance, click Instances > Virtual Applications.

  7. View the details of the deployed virtual application in the Virtual Application Instances pane. The details include a list of virtual machines that are provisioned on the cloud infrastructure for that deployment, the IP address, virtual machine status, and role status. Role is a unit of function that is performed by the virtual application middleware on a virtual machine.

    The status values are listed in the following table:


    Possible status values for a deployed virtual application

    Status Deployment description Virtual machine description
    LAUNCHING The virtual application is being deployed. The virtual machine is being provisioned on the infrastructure cloud.
    INSTALLING Not applicable The components of the virtual application are being provisioned on the virtual machine.
    RUNNING Resources are being provisioned on the infrastructure cloud. The components of the virtual application are running on the virtual machine and can be accessed.
    TERMINATING The virtual application instance resources are being deleted. The virtual machine is being deleted and its resources are released.
    TERMINATED The virtual application instance resources are deleted. History files are retained. The virtual machine is deleted and resources were released.
    STOPPING The virtual application instance is stopping. The virtual machine is being stopped.
    STOPPED The virtual application instance is stopped. The virtual application can be made available again by starting the instance. The virtual machine is stopped and it can be restarted.
    FAILED The deployment process could not be started because of either the application configuration or a failure that is occurring in the infrastructure cloud. The virtual machine did not start successfully.
    ERROR An error occurred during deployment. Check the logs and determine the cause of the error before you redeploy the virtual application. The virtual machine status.
    NOT_READY The virtual application instance is in maintenance mode. The NOT_READY status does not apply to virtual machines.
    You can also view the virtual machine role health status information. For example, a red check mark is displayed when processor usage becomes critical on the virtual machine.

    Click Endpoint to view the endpoint information for a role. For a DB2 deployment, you can have more than one endpoint. For example, an endpoint for the application developer and one for the database administrator. If the elastic load balancer shared service is used by the virtual application instance, the endpoint URL is based on the virtual host name for the elastic load balancer instance. Otherwise, the endpoint URL is based on the IP address of the virtual machine that is associated with the role. For more information, see the Related concepts section.

The virtual application instance is successfully deployed and started. To stop the virtual application instance, select the virtual application from the list, and click Stop. To start the virtual application instance again, select the virtual application and click Start.

To redeploy a virtual application, select the virtual application from the Virtual Application Patterns panel, and click the Deploy icon in the Virtual Application Builder.

To remove a stopped application, select it from the Virtual Application Patterns panel, and click Delete.

After you deploy the virtual application, you can use the IP address of the virtual machines to access the application artifacts. For example, you can manually enter the URL in your browser.

http://IP_address:9080/tradelite/
IP_address is the IP address of the deployed WAS virtual machine.

If you uploaded an SSH public key during the deployment, you can also connect directly to a virtual machine without a password if you have the private key.
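
For example, assuming the default virtuser account that is used on many of the system images (the account name can vary by image), the connection might look like this:

    ssh -i /path/to/private_key virtuser@IP_address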

You can also view and monitor statistics for your deployed virtual machines and download and view the log files.


Clone virtual application patterns

Use the workload console to clone an existing virtual application pattern. A clone is an exact copy of an existing virtual application pattern that you can modify as needed.

You must have access to the pattern or have the Workload resources administration with full permissions to complete this task. You can use the workload console, the command line interface, or the REST API to complete this task. For the command line and REST API information, see the Related information section.

  1. Click Patterns > Virtual Applications.
  2. Select a virtual application pattern and click the Clone icon on the toolbar.
  3. Specify the name for the copy of the virtual application pattern and click OK.


Export virtual application patterns

You can export applications from the system catalog so that you can use them on a different system. You must have access to patterns, access to create patterns, or have the Workload resources administration with full permissions to complete this task. You can use the workload console or the command line interface to complete this task. For the command line information, see the Related information section.

  1. Click Patterns > Virtual Applications.
  2. Select a virtual application pattern and click Export on the toolbar.
  3. Click Save to save the application compressed file to a local directory.


Enable Image Construction and Composition Tool as a virtual application pattern

A virtual application pattern named ICCT, with the type IBM Image Construction and Composition 1.2, is available for deployment if you have access to this pattern. Otherwise, you can enable and update IBM Image Construction and Composition Tool as a virtual application pattern in PureApplication System.


Enable Image Construction and Composition Tool for the first time

If you are using IBM Image Construction and Composition Tool for the first time, enable it by using the instructions in this task. This task needs to be completed only once. After completing this task, Image Construction and Composition Tool is available for use as a virtual application pattern. You must be granted access to patterns, access to create patterns, or have the Workload resources administration with full permissions to complete this task. In this task you are working with the Virtual Application Builder, which opens in a separate window when it is started. If you are using Microsoft Internet Explorer V7 or V8, you might need to change your browser settings so that the Virtual Application Builder is displayed correctly. From the browser menu, click Tools > Internet Options... and configure pop-up windows to open in a new tab in the current window.

  1. Enable the Foundation Pattern type: click Cloud > Pattern Types, select Foundation Pattern Type 2.0.0.0, and click Enable.

  2. Click Cloud > Default Deploy Settings and select the imported base image. Then, click Patterns > Virtual Applications, select the Image Construction and Composition Tool virtual application pattern, and click Deploy.

  3. When deployment is complete, from the application detail pane, click the Endpoint link.

  4. In the Endpoint information window that is displayed, click the URL to start Image Construction and Composition Tool. This step is the correct way of starting Image Construction and Composition Tool. You cannot start Image Construction and Composition Tool by entering the URL in a browser or bookmarking it for later.

You are ready to use Image Construction and Composition Tool from the PureApplication System user interface.


Enable Image Construction and Composition Tool if the virtual application has been deleted

If you deleted the IBM Image Construction and Composition Tool virtual application pattern, you can create it again by using the steps in this task. After completing this task, Image Construction and Composition Tool is available for use as a virtual application pattern. You must be granted access to patterns, access to create patterns, or have the Workload resources administration with full permissions to complete this task. In this task you are working with Virtual Application Builder, which opens in a separate window when it is started. If you are using Microsoft Internet Explorer V7 or V8, you might need to change your browser settings so the Virtual Application Builder is displayed correctly. From the browser menu, click Tools > Internet Options... and configure pop-up windows to open in a new tab in the current window.

  1. Click Patterns > Virtual Applications.
  2. Click the New icon on the toolbar.
  3. Select IBM Image Construction and Composition 1.2 and click Start Building.
  4. Scroll down, open Other components and drag Image Construction and Composition Tool onto the canvas.
  5. Click Save.
  6. From the Virtual Applications pane, select the Image Construction and Composition Tool virtual application pattern and click Deploy.
  7. When deployment is complete, from the application detail pane, click the Endpoint link.
  8. In the Endpoint information window that is displayed, click the URL to start Image Construction and Composition Tool. This step is the correct way of starting Image Construction and Composition Tool. You cannot start Image Construction and Composition Tool by entering the URL in a browser or bookmarking it for later.

You are ready to use Image Construction and Composition Tool from the PureApplication System user interface.


Work with virtual application pattern plug-ins

Plug-ins provide the constituent parts of a virtual application and the underlying implementation so that the application is deployable in the cloud. Pattern types, the containers of solution-specific and topology-specific resources required for different types of virtual applications, are collections of plug-ins.

Plug-ins contribute components, links, and policies that are available in the Virtual Application Builder. They are grouped into pattern types. When a virtual application builder creates a virtual application pattern, the first step is choosing the pattern type. This choice determines the options and the user experience in the Virtual Application Builder. In the Virtual Application Builder, virtual application builders can select from the components, links, and policies that the plug-ins in the pattern type provide. The plug-in that contributes a component or link completely determines its semantics and operation. Components, links, and policies are the most user-visible capabilities that a plug-in can contribute, but a plug-in must also provide other capabilities. Plug-ins are responsible for the implementation of components and links when a virtual application is deployed, and for maintenance through the entire lifecycle of the virtual application. A plug-in must contribute appropriate lifecycle scripts to manage the virtual application through its various lifecycle events.

A pattern type is a collection of plug-ins that are designed for a specific type of virtual application pattern and used as the foundation of a virtual application. For example, a web application uses the IBM Web Application Pattern, and a database application uses the IBM Database Patterns. When a user selects a pattern type in the Virtual Application Builder, the design experience is determined by the associated plug-ins.

For more information about working with plug-ins, see the Related tasks section.


Plug-ins included with pattern types

Several preinstalled system plug-ins are available with the IBM PureApplication System W1500 pattern types. You can use these plug-ins to extend the function of virtual applications. The plug-ins contain the necessary code that you need for component parts when you build virtual application patterns, including caching, database, applications, transaction process, and messaging services.

The following table lists pattern types included with the product license. Pattern types that are not included in the product license require a separate license purchase.


Pattern types included in the product license

Pattern type Description
IBM Foundation Pattern A pattern type that provides shared services for deployed virtual applications such as monitoring and load balancing.
IBM Web Application Pattern A pattern type to build and deploy web applications.
IBM Database Patterns A pattern type to build and deploy database instances.
Application Pattern Type for Java A pattern type to build and deploy Java applications.

In the Virtual Application Builder, components are grouped into categories such as application, database, messaging, OSGi, transaction processing, and user registry. For a complete list of the virtual application component categories, see the Related information section.


Administer system plug-ins

Use the workload console to manage the system plug-ins in IBM PureApplication System W1500.


System plug-ins overview

Virtual application pattern types include a set of preinstalled system plug-ins.

System plug-ins contain the necessary code for component parts when you build virtual application patterns. The plug-in controls the end-to-end processing and implementation of the component parts that you use to build the virtual application pattern. Plug-ins also contribute components, links, and policies that you can choose to customize the virtual application pattern.

After you set up system plug-ins, you can build a virtual application pattern with component parts or edit an existing virtual application pattern. After the administrator enables the plug-ins, you can select a specific version and list all the plug-ins, which use the IBM version, release, modification, and fix level (v.r.m.f) format.

To administer a plug-in, you must be assigned Workload resources administration with full permissions.

A plug-in is disabled if any of its attributes (configuration parameters) are not specified.


Add, modify, and delete system plug-ins

Use the workload console to add, modify, or delete a plug-in in the catalog. Plug-ins define components, links, and policies for virtual application patterns. To complete this task, you must have the Create new catalog content permission or the Workload resources administration with full permissions.

Plug-ins are disabled when configurations are not completed. You can use the workload console, the command line interface, or the REST API to complete this task. For the command line and REST API information, see the Related information section.

Click Cloud > System Plug-ins.


Adjust virtual machine health status threshold values

You can adjust the virtual machine health status threshold values by editing the monitoring plug-in in the Foundation Pattern Type. To complete this task, you must have the Create new catalog content permission or the Workload resources administration with full permissions. The virtual machine health status is determined by processor, memory, and storage metrics. The default threshold values that are configured in this task take effect after deployment. Existing virtual machines are not updated when these values are changed. Warning and critical threshold values are defined for each of the three metric types.

If any of the metrics is in the Warning state while the virtual machine is in the Running state, the virtual machine Status and the Deployment status display a secondary Warning status. Similarly, if any of the metrics is in the Critical state, the virtual machine Status and the Deployment status display a secondary Critical status.

  1. Click Cloud > System Plug-ins.

  2. Select the Foundation Pattern and click the monitoring plug-in.

  3. Click the Configure the plug-in icon on the toolbar.

  4. From the Configuration window, you can edit the threshold values.

    • CPU Critical Threshold
    • CPU Warning Threshold
    • Memory Critical Threshold
    • Memory Warning Threshold
    • Storage Critical Threshold
    • Storage Warning Threshold

    Change the threshold value to a new value in the corresponding fields and click Update.


Plug-in development guide

Plug-ins define the components, links, and policies that you use in the Virtual Application Builder to create virtual application patterns, or extend existing virtual application patterns. This guide describes how to develop your own custom plug-ins. Custom plug-ins add behavior and function that users can use to enhance and customize the operation of their virtual applications.

Related links:

  • JSON Formatter and Validator
  • JSONLint: The JSON Validator
  • Python documentation: Subprocess management


Plug-in Development Kit

The Plug-in Development Kit (PDK) is designed to help you build your own plug-ins. Plug-ins provide the capabilities to create, deploy, and manage virtual applications in Virtual Application Builder.


Virtual application patterns

PureApplication System provides a generic framework for designing, deploying, and managing virtual applications. The model that you build by using the application artifacts and quality of service levels is called a virtual application pattern. You can use predefined patterns, extend existing patterns, or create new ones.

When you build a virtual application pattern, you create the model of a virtual application by using components, links, and policies.

Consider an order management application with the following requirements: a web application that runs on WAS, a connection to an existing DB2 database, and a web response time that is kept between 1000 and 5000 ms.

A virtual application builder can use PureApplication System to create a virtual application pattern by using components, links, and policies to specify each of these requirements.

Component

Represents an application artifact such as a WAR file, and attributes such as a maximum transaction timeout. In terms of the order management application example, the components for the application are the WAS nodes and the DB2 nodes. The WAS components include the WAR file for the application, and the DB2 components connect the application to the existing DB2 server.

Link

A connection between two components. For example, if a web application component has a dependency on a database component, an outgoing link from the web application component to the database component defines this dependency. In terms of the order management application example, links exist between the WAS components and the DB2 components.

Policy

Represents a quality of service level for application artifacts in the virtual application. Policies can be applied globally at the application level or specified for individual components. For example, a logging policy defines logging settings and a scaling policy defines criteria for dynamically adding or removing resources from the virtual application. In terms of the order management application example, a Response Time Based scaling policy is applied that scales the virtual application in or out to keep the web response time between 1000 and 5000 ms.
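
As a hedged illustration, the order management model might be expressed in application model JSON along the following lines. The component types and attribute names shown here are hypothetical; the exact schema is defined by the plug-ins in the selected pattern type:

    {
       "name"  : "Order Management",
       "nodes" : [
          { "id" : "web", "type" : "WAR", "attributes" : { "archive" : "orders.war" } },
          { "id" : "db",  "type" : "DB2", "attributes" : { "dbname"  : "ORDERS" } }
       ],
       "links" : [
          { "id" : "web-to-db", "source" : "web", "target" : "db" }
       ]
    }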

When you deploy a virtual application, the virtual application pattern is converted from a logical model to a topology of virtual machines deployed to the cloud. Behind the scenes, the system determines the underlying infrastructure and middleware that is required for the application, and adjusts them as needed to ensure that the quality of service levels that are set for the application are maintained. A deployed topology that is based on a virtual application pattern is called a virtual application instance. You can deploy multiple virtual application instances from a single virtual application pattern.

The components, links, and policies that are available to design a particular virtual application pattern are dependent on the pattern type that you choose and the plug-ins that are associated with the pattern type.


Virtual application pattern types and plug-ins

A pattern type represents a collection of related components, links, and policies used to build a set of virtual applications. A pattern type defines a virtual application domain. For example, the IBM Web Application Pattern pattern type defines a domain in which J2EE web applications are deployed. It includes components for WAR, EAR, and OSGiEBA files. These components have an attribute for the appropriate archive file, which an application builder specifies during construction of the virtual application pattern.

The web application can connect to a database, so the pattern type also includes a component to represent the database and provides its connection properties as attributes. The pattern type also defines a link between the database and the WAR file to represent communication between the application and the database.

The application components (WAR, EAR, OSGiEBA) can all be configured with quality of service levels by applying policies. The available options include scaling, routing, logging, and JVM policies.

The plug-ins that are associated with Web Application Pattern define these components, links, and policies. They also provide the underlying implementation to deploy the virtual applications in the cloud and perform maintenance on deployed virtual application instances.

Virtual application builders create virtual application patterns in the Virtual Application Builder. Within Virtual Application Builder, you begin by selecting the pattern type to use. This choice determines the set of components, links, and policies that you can use to build the virtual application and the type of virtual applications that you can create.

Plug-in developers are responsible for creating or customizing pattern types and plug-ins that control the available components, links, and policies and corresponding configuration options, as well as the code for implementing deployments.


Virtual applications and plug-ins

PureApplication System provides a Virtual Application Builder to create virtual application patterns. In a virtual application pattern, components, links, and policies can have attributes, such as user IDs, passwords, and database names, or archive files, such as a WAR file. Virtual application builders assign values to these attributes when creating a virtual application pattern. The completed virtual application pattern is saved as an application model JSON object. The application model can then be deployed.

Plug-ins contribute the components, links, and policies that are available in the Virtual Application Builder and are grouped into pattern types. The plug-in that contributes a component or link completely determines its semantics and operation, and must also provide the lifecycle scripts that manage the virtual application through its lifecycle events. For the full discussion, see Work with virtual application pattern plug-ins.


Contents of the PDK

The PDK is a .zip package that is available as a download from developerWorks or from IBM PureApplication System W1500 itself. It includes a plug-in and pattern type build environment, samples, and a tool to create a plug-in starter project.
File or directory Description
docs Contains documentation for the PDK.

  • docs/index.html Open this file with your web browser to view a list of hyperlinks to the documentation in the docs directory.
  • docs/PDKSampleUsersGuide.pdf

    The PDK Samples User Guide. The Samples are also located in the information center.

  • docs/javadoc

    This directory contains Javadoc for PureApplication System interfaces that the plug-ins can invoke from Java code.

  • docs/pydoc

    This directory contains documentation for the maestro module used in lifecycle Python scripts for nodeparts and parts.

iwd-pdk-workspace The root directory of your plug-in development workspace. Each plug-in and pattern type has its own project directory in this root directory. These directories can be used directly from the command line or imported into Eclipse as projects.
com.ibm.maestro.plugin.pdk.site.zip Contains an Eclipse plug-in that you can use in your Eclipse or Rational Application Developer environment to create and edit some of the configuration files for your pattern type plug-ins.
pdk-debug-{version}.tgz This file is the debug plug-in that can be installed into the PureApplication System instance and used to develop and debug the plug-ins. The plug-in includes features to deploy and debug a topology document, which is a JSON object, and debug plug-in installation and lifecycle Python scripts on deployed nodes. It does not support debugging for Java code in plug-ins.
pdk-unlock-{version}.tgz The unlock plug-in enables you to delete a plug-in that is in use by a deployed application, replace it with an updated version, and activate the modified plug-in on deployed virtual machines in the application.


plugin.depends tool

The plugin.depends tool is provided in the IBMWorkloadPluginDevKit_<version>.zip file. This plug-in development tool is a standard OSGi plug-in project. The tool includes PureApplication System plug-in libraries for development, build tools for plug-ins and pattern types, and an Ant build library.


Install the Plug-in Development Kit

To get started using the Plug-in Development Kit, first download and install the kit.

If you choose to download the PDK from developerWorks, ensure that you download the correct PDK version for your version of IBM PureApplication System W1500.


Product and PDK version compatibility

PDK version Product that includes this PDK Product versions supported by built plug-ins Significant updates
1.0.0.4 IBM PureApplication System 1.0 IBM PureApplication System 1.0 Added shared service example, added pattern type validation, enhanced the PDK Eclipse extension to support pattern type development, enhanced the PDK readme (pdk-doc-index.html), and added a virtual application template to the Hello sample.
1.0.0.5 IBM PureApplication System 1.0.0.2 IBM PureApplication System 1.0.0.2 or later Added the WAS Community Edition (WAS CE) sample.

  1. Download the PDK by using one of the following methods:

    • Download the files from the developerWorks wiki: IBM Plug-in Development Kit
    • Click the Workload Console tab at the top of the Welcome page to open the workload console. Click the Download Tooling link on the Welcome page, and select the Plug-in Development Kit.

  2. Extract the .zip file into a directory.

  3. From the command line, change to the directory where you extracted the contents of the .zip file and run Ant. The license for the PDK is displayed.

    You must accept the PDK license to continue and unpack the contents of the PDK.

    The following files and directories are extracted into the directory: docs, iwd-pdk-workspace, com.ibm.maestro.plugin.pdk.site.zip, pdk-debug-{version}.tgz, and pdk-unlock-{version}.tgz. For a description of each file and directory, see Contents of the PDK.

The PDK is downloaded and installed. Now you must complete the task of Setting up the plug-in development environment.


Set up the plug-in development environment

Set up the environment to develop custom plug-ins used in IBM PureApplication System W1500. Before you set up the environment, install the required products, such as a Java SDK, Apache Ant, and Eclipse or Rational Application Developer.

The following tasks show you how to build the plugin.depends project and the sample plug-ins and pattern types. The plugin.depends project is required when you build projects, so be sure that you build this project even if you choose not to build the sample plug-ins and pattern types.


Build the sample plug-ins and pattern types from the command line

About this task

These steps show you how to build the sample plug-ins and pattern types from the command line. See the sections Sample: Creating a plug-in project from the command line and Sample: Building a single plug-in and pattern type with the command-line tools for steps to build your own plug-ins and pattern types from the command line.

Procedure

Build the plugin.depends project

  1. Type cd iwd-pdk-workspace/plugin.depends from the command line.

  2. In the plugin.depends project, run the build.xml Ant script with the following command:

    ant
    
    The command builds all the plug-ins in the workspace.

Build the Hello sample plug-ins and pattern type

  1. Change directory to the patterntype.hello project and run the build.patterntype.xml script: type ant -f build.patterntype.xml. This command builds the pattern type.

  2. A folder named export is created in the patterntype.hello project.

  3. Go to the root of the export folder. The .tgz pattern type binary file is located here. It is ready for installation into the catalog.

Build the WAS Community Edition sample plug-ins and pattern type

To use the WAS CE sample, you must first download the WAS CE binary file. Then, you can build the WAS CE plug-ins (from plugin.depends) and the WAS CE pattern type as shown in the following steps, which demonstrate how to use the -Dstorage.dir parameter to package the WAS CE binary file with the sample.

  1. Create a storage directory. In the following steps, this storage directory is referred to as <storage_dir>.

  2. Download the binary file for the WAS Community Edition (WAS CE) server to <storage_dir>/wasce so that you can add it to the pattern type:

    1. Go to the download page for WAS CE on DeveloperWorks: https://www.ibm.com/developerworks/downloads/ws/wasce/.

    2. Click Download.

    3. Log in using your DeveloperWorks user account.

    4. Download the Server for UNIX to <storage_dir>/wasce. The file name is wasce_setup-3.0.0.x-unix.bin.

      Note: The file name will vary depending on the current version.

  3. Update config.json with the file name for the WAS CE version that you downloaded in step 2.

    1. Open plugin.com.ibm.wasce-1.0.0.x/plugin/config.json in a text editor.

      Note: The name of this directory will vary depending on the version of the sample that you are using.

    2. Change the file name referenced by config.json to the name of the file that you downloaded in step 2. You must change the file name in two places:

      "files": [
            "wasce\/wasce_setup-3.0.0.2-unix.bin"
         ],
      
      and

      "parts": [
                     {
                        "part": "parts\/wasce.tgz",
                        "parms": {
                           "binaryFile": "wasce\/wasce_setup-3.0.0.2-unix.bin"
                        }
                     }
                  ]
               }
            ],
      

    3. Save your changes.

    4. Run the build.plugin.xml Ant script in the plugin.com.ibm.wasce-1.0.0.x project to rebuild the plug-ins for the WAS CE sample. To run the Ant script, change directory to plugin.com.ibm.wasce-1.0.0.x from the command line. Type ant -f build.plugin.xml.

      Note: The name of this directory will vary depending on the version of the sample that you are using.

  4. Change directory to the patterntype.wasce.ptype project and run the build.patterntype.xml script. Type ant -f build.patterntype.xml -Dstorage.dir=<storage_dir>. This command builds the pattern type and copies the WAS CE binary file into the pattern type.

    Note: If your files are on a remote site, use the -Dstorage.url parameter. For example, ant -f build.patterntype.xml -Dstorage.url=<remote server URL>.

  5. The .tgz pattern type binary file, wasce.ptype-1.0.0.1.tgz, is now ready for import into the catalog. When you import the pattern type to the system, the WAS CE binary file is installed into the Storehouse. You can see the file by using the Storehouse Browser.

    If you want to change the plug-in or pattern type, you can install a new version of it without using the -Dstorage.dir parameter during your build and without including the WAS CE binary file, because the same WAS CE binary file is already in the Storehouse. This method shortens the import time for your enhancements. For more information about the -Dstorage.dir and -Dstorage.url parameters, see Pattern type packaging reference.

Results

You built the sample pattern type and plug-ins. You are now ready to deploy the sample virtual applications, or to use the Virtual Application Builder to build your own applications.

What to do next

If you want to explore plug-in development with the sample pattern type, see Samples for the Plug-in Development Kit.


Build the sample plug-ins and pattern types in Eclipse

About this task

These steps show you how to build the sample plug-ins and pattern types in Eclipse. See the section Sample: Developing a plug-in and pattern type with Eclipse for steps to build your own plug-ins and pattern types in Eclipse.

Procedure

Build the plugin.depends project

  1. Import the PDK plugin.depends project and the sample source projects.

    1. Create a workspace and start Eclipse.
    2. Click File > Import > General > Existing Projects into Workspace. Select the Select root directory option. Click Browse to select the iwd-pdk-workspace directory where you downloaded and expanded the pdk-<version>.zip file.
    3. Select plugin.depends, and the sample projects to build them. When the import is complete, the projects are added to your workspace.

  2. Build all plug-ins in the workspace. Go to the plugin.depends project and run the build.xml Ant script. To run the Ant script, right-click on the file and select Run As > Ant Build.

Build the Hello sample plug-ins and pattern type

  1. Build the hello pattern type. Go to the patterntype.hello project and run the build.patterntype.xml script. To run the Ant script, right-click on the file and select Run As > Ant Build.

  2. Refresh the patterntype.hello project. A folder named export displays. Go to the export folder. The .tgz pattern type file is located here. It is ready for installation into the catalog.

Build the WAS Community Edition sample plug-ins and pattern type

To use the WAS CE sample, you must first download the WAS CE binary file. Then, you can build the WAS CE plug-ins (from plugin.depends) and the WAS CE pattern type as shown in the following steps, which demonstrate how to use the -Dstorage.dir parameter to package the WAS CE binary file with the sample.

  1. Create a storage directory. In the following steps, this storage directory is referred to as <storage_dir>.

  2. Download the binary file for the WAS Community Edition (WAS CE) server to <storage_dir>/wasce so that you can add it to the pattern type:

    1. Go to the download page for WAS CE on DeveloperWorks: https://www.ibm.com/developerworks/downloads/ws/wasce/.

    2. Click Download.

    3. Log in using your DeveloperWorks user account.

    4. Download the Server for UNIX to <storage_dir>/wasce. The file name is wasce_setup-3.0.0.x-unix.bin.

      Note: The file name will vary depending on the current version.

  3. Update config.json with the file name for the WAS CE version that you downloaded in step 2.

    1. Expand the plugin.com.ibm.wasce-1.0.0.x project on the Project Explorer tab in Eclipse.

    2. Expand the plugin folder in the plugin.com.ibm.wasce-1.0.0.x project.

      Note: The name of this directory will vary depending on the version of the sample that you are using.

    3. Double-click config.json to open it in the Config Json Editor. Alternatively, you can right-click the file and select Open With > Config Json Editor.

    4. Select the config.json tab in the editor.

    5. Change the file name referenced in config.json to the name of the file that you downloaded in step 2. You must change the file name in two places:

      "files": [
            "wasce\/wasce_setup-3.0.0.2-unix.bin"
         ],
      
      and

      "parts": [
                     {
                        "part": "parts\/wasce.tgz",
                        "parms": {
                           "binaryFile": "wasce\/wasce_setup-3.0.0.2-unix.bin"
                        }
                     }
                  ]
               }
            ],
      

    6. Save your changes.

    7. Run the build.plugin.xml Ant script in the plugin.com.ibm.wasce-1.0.0.x project to rebuild the plug-ins for the WAS CE sample. To run the Ant script, right-click on the file and select Run As > Ant Build.

  4. Build the WAS CE pattern type. Go to the patterntype.wasce.ptype project and run the build.patterntype.xml script by using the -Dstorage.dir argument. To run the Ant script, right-click on the file and select Run As > Ant Build. Go to the Main tab, and add -Dstorage.dir=<storage_dir> to the arguments section. Click Run. This command builds the pattern type and copies the WAS CE binary file into the pattern type.

    Note: If your files are on a remote site, use the -Dstorage.url parameter. For example, ant -f build.patterntype.xml -Dstorage.url=<remote server URL>.

  5. Refresh the patterntype.wasce.ptype project. A folder named export displays. Go to the export folder. The .tgz pattern type file is located here. It is ready for installation into the catalog.

    If you want to change the plug-in or pattern type, you can install a new version of it without using the -Dstorage.dir parameter during your build and without including the WAS CE binary file, because the same WAS CE binary file is already in the Storehouse. This method shortens the import time for your enhancements. For more information about the -Dstorage.dir and -Dstorage.url parameters, see Pattern type packaging reference.

Results

The environment is set up and the sample pattern type and plug-ins are built.

What to do next

If you want to explore plug-in development with the sample pattern type, see Samples for the Plug-in Development Kit.


Plug-in development overview

This section describes plug-in development concepts and how they relate to creation and management of virtual application patterns.


Plug-in concepts

Whether you are using existing plug-ins or developing custom plug-ins, it is useful to get familiar with plug-in concepts and the contents of a plug-in.

Plug-ins contribute components, links, and policies that display in the Virtual Application Builder. In the Virtual Application Builder, virtual application builders can select from the components, links, and policies that the plug-ins in the pattern type expose. The plug-in that contributes a component or link completely determines its semantics and operation, including the settings that can be configured, how it is deployed, and how it is managed and configured throughout the lifecycle of the virtual application.


Contents of a plug-in

A plug-in is a collection of files that are packaged as a .tgz archive file. Each plug-in must contain a configuration file named config.json. Additional files that implement extensions for modeling, deployment, run time, and management are optional and are organized by convention into a directory structure in the archive file.

The following is a list of plug-in contents and extensions:

config.json

Plug-in configuration file. The only required file for a plug-in.

appmodel/metadata.json

Components, links, and policies in the plug-in that are exposed to users in the Virtual Application Builder to build a model of a virtual application.

appmodel/tweak.json and appmodel/operation.json

Provides code for changing a deployed virtual application instance from the deployment panel in the console.

bundles/{name}.jar

Specifies the Java archive (JAR) file that contains the scanners, transformers, and provisioners of the plug-in.

nodeparts/{name}.tgz

Artifacts that are installed by the activation script. The workload agent is a node part.

parts/{name}.tgz

Specifies artifacts that are installed by the workload agent. These extension files are included with the plug-in to define how the virtual application is modeled, deployed, and managed on the virtual machine to which it is deployed.
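
Putting these conventions together, a minimal plug-in archive might be laid out as follows. All names except the required config.json are illustrative:

    plugin.example-1.0.0.1.tgz
       config.json
       appmodel/metadata.json
       bundles/example.jar
       nodeparts/example.nodepart.tgz
       parts/example.scripts.tgz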

For more information about how the contents of a plug-in work, see Plug-in development guide.


Parts, node parts, and packages

Node parts are data on the virtual machine, such as a script, that can be used by other parts. For example, the firewall node part opens and closes ports in the firewall.

Parts are more sophisticated. Consider the sample Java Platform, Enterprise Edition application, tradelite. An application on WAS accesses a deployed DB2 database. WAS must be configured to use the database with information such as the IP address, port number, user ID, and password. In addition, the database and its node must be started before WAS can use it. To manage this complexity, parts use roles to coordinate the startup of all the virtual machines and software in the deployment. Roles have states, and might have lifecycle scripts when they enter each state.

Packages are collections that consist of parts, node parts, or a combination of both. They are also a convenience mechanism for plug-in developers. For example, the WAS package has two parts. The parts are different for AIX on PowerPC and Linux on Intel, and for 32-bit and 64-bit hardware architecture. Four different sets of two parts total eight parts. Within a single package, each set has its own requires block that identifies the operating system and hardware requirements. Specify the minimum memory requirement in megabytes, and the minimum disk requirement in gigabytes. Then, in your component or link transform, you would include the WAS part, and the transform determines which set of parts to use.

Packages have names, and their names must be unique, with one exception: the package name default is special. The default package is automatically included on all virtual machines in the deployment, so no component or link transform must explicitly include it. In addition, multiple plug-ins can define a default package; all default definitions are accumulated with a union to create one consolidated default package.

Because parts, node parts, and packages organize the implementation, users do not need to know the details and decisions about implementation that are automated in the background.


Roles

Roles provide lifecycle scripts to coordinate the startup of software on virtual machines. In parts, they offer event notification. You can also synchronize two virtual machines, as in the tradelite sample application. WAS cannot fully start and be operational until the DB2 instance is running. In fact, the WAS instance needs information such as the port and IP address, and user ID and password from the DB2 node to successfully access it. Two roles are used to handle the connection: a WAS role on the WAS virtual machine and a DB2 role on the DB2 virtual machine. The wasdb2 plug-in provides the link between the WAS and DB2 components, and its link transform creates a dependency of the WAS role on the DB2 role. When the DB2 role is active, it exports the required data to its role name space in the shared infrastructure maestro data store. As a result of the dependency, the WAS/DB2/changed.py script is started. This script extracts the exported data from the DB2 role, and uses it to configure the WAS instance to use the linked DB2 instance, and then sets the status of the WAS node to RUNNING.

In the application model (constructed in the Virtual Application Builder), components usually correspond to deployed virtual machines, and links usually correspond to dependencies between them, but not always. The application model and the topology document are independent. Plug-ins contribute components, links, and policies to the Virtual Application Builder pane. The user builds a logical view of the application and provides attribute values. It is up to the component and link transforms to generate topology document fragments used to realize a deployment that corresponds to what the user created by using the concepts and features that the plug-ins provide. To generate a deployed virtual machine, a component transform generates a vm-template element in a vm-templates JSON array. As implied by the array value, multiple templates can be generated. For a dependency, a link transform is expected to generate a role depends element. The link transform must identify the source and target roles. The JSON returned by the link transform is inserted as the depends element of the identified source role in the source vm-template.


Topology document

Two major topology document features are important for understanding what component and link transforms must do: the vm-templates array, and the packages and roles that each vm-template contains.

The topology document is a JSON object. The vm-templates element in the topology document is a JSON array of vm-template elements. Each element in the array represents a virtual machine to deploy. Components correspond to vm-templates, and links correspond to links or dependencies between them. However, a component does not have to generate a vm-template, and it might generate more than one. So, it is expected that a component generates zero or more vm-templates.

The next task of your pattern type is to install, configure, and start the appropriate software on each of the deployed virtual machines. First, you must get software deployed to each virtual machine. Software is packaged in part and node part elements, which are defined in the config.json file that is contained in the plug-in. These parts are further packaged into packages in config.json, and can be further qualified with requires elements to associate parts and node parts with specific virtual machine types, for example, Intel x86 or PowerPC architecture, 32-bit or 64-bit processor, or Linux or AIX operating system. Virtual machine processor, memory, and disk requirements can also be specified. The transforms specify software to install on the virtual machine by inserting package names into the packages element of the vm-template object. The packages element is a JSON array.
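
For example, a component transform for a single middleware node might contribute a fragment along these lines. The template, package, and role names are illustrative:

    "vm-templates" : [
       {
          "name"     : "WAS",
          "packages" : [ "WAS" ],
          "roles"    : [
             { "type" : "WAS", "name" : "WAS" }
          ]
       }
    ]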

Next, you must determine whether you want to use parts or node parts for implementation.

Node parts are installed by the activation script and generally contain binary files and scripts that augment the operating system. Node parts have a setup.py script, and can install .py or .sh start scripts. This activity happens before the agent is started. For example, the firewall node part defines a generic API for shell scripts and Python scripts to manipulate the firewall settings on the virtual machine. During activation, each node part is downloaded and extracted to /0config, then its setup/setup.py script is started, if one exists. After all node parts are set up, installation scripts from common/install are started in alphanumeric order (for example, 0_install1.sh, then 1_install1.py). Finally, start scripts from common/start are started in alphanumeric order. Use node parts when you do not need the support that is provided by the workload agent.
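
Based on this activation sequence, a node part archive conventionally contains directories like the following. The script names other than setup/setup.py are illustrative:

    nodepart.example.tgz
       setup/setup.py
       common/install/0_install.sh
       common/start/0_start.sh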

Parts are installed by the workload agent and generally contain binary files and lifecycle scripts that are associated with roles and dependencies. First, each part is downloaded and extracted, and its install.py script is run, if one exists.

A role represents a managed entity within a virtual application instance. Each role is described in a topology document by a JSON object, which is contained within a corresponding vm-template.

Roles are started in parallel by running lifecycle scripts for each role. Roles have states, and the typical state progression for a role is as follows:

  1. INITIAL: Roles start in the initial state. The install.py script for each role is started. If it completes successfully, the role progresses automatically to the INSTALLED state. If the install.py script fails, the role moves to the ERROR state, and the deployment fails.
  2. INSTALLED: From this state, the configure.py script runs, if one exists.
  3. CONFIGURING: From this state, the start.py script runs, if one exists.
  4. STARTING: The automatic state setting stops. A lifecycle script must explicitly set the role state to RUNNING.
  5. RUNNING: The role is operational.

There is a considerable amount of state that the lifecycle scripts can access from the maestro module. This state includes most of the relevant data from the topology document, and is how component and link transforms pass attributes and data to the lifecycle scripts on deployed virtual machines.
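
As a minimal sketch of a start.py lifecycle script for a role, assuming the maestro module that is documented in docs/pydoc in the PDK (the startup command and path are illustrative):

    import subprocess

    import maestro  # available to lifecycle scripts on deployed virtual machines

    # Start the middleware that this role manages (illustrative command).
    subprocess.call(['/opt/example/bin/start_server.sh'])

    # From the STARTING state, the role does not advance automatically;
    # the lifecycle script must set the role state explicitly.
    maestro.role_status = 'RUNNING'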

Roles participate in virtual machine startup, shutdown, lifecycle management, and can react to events. A key use of roles is in implementing links between components in the application model. The wasdb2 plug-in implements the link between a WAR, EAR, or OSGiEBA file running on a WAS node and the database instance it accesses. A DB2 database instance has a DB2 role used to start DB2, passing in and using attributes and data at the appropriate times in its lifecycle scripts. When the DB2 role moves to the RUNNING state, in its start.py script, it exports some useful data items: its IP address, port number, database name, user ID, and password. The wasdb2 plug-in, as part of implementing the link in the application model, sets up the WAS role in the WAS node to depend on the DB2 role in the DB2 node. When both the WAS and DB2 roles transition to the RUNNING state, the WAS/DB2/changed.py script is started. That script then reads the data that the DB2 role exported on the DB2 node, and uses it to configure the WAS node to access that DB2 database, the one linked to in the application model. Thus, setting up roles to depend on each other provides important dependency processing and synchronization during startup. If you must pass information between deployed virtual machines during startup, use roles and dependent roles for the implementation.


Virtual application lifecycle

Plug-ins are involved in the creation, deployment, and management of a virtual application.


Create the application model

To create or update a virtual application, you use the Virtual Application Builder in PureApplication System to define the application model. The virtual application model is defined by the user with components, links, and policies that are available within the selected pattern type and any secondary pattern types that are associated with the selected pattern type.

Components are fundamental building blocks used to construct the virtual application model. Select components on the Virtual Application Builder canvas and provide values for the component attributes. Then, you can add policies and links between components, and then configure values for their attributes.

When you are building the virtual application, the back end of the Virtual Application Builder does a background scan of the user artifacts to help guide the modeling of the connections between the components, links, and policies. For example, the EAR file in the WAS enterprise application component is scanned for a list of required resources.

A plug-in controls the end-to-end handling of its capabilities, starting with the components, links, and policies used to model the artifacts that are associated with the plug-in. The appmodel/metadata.json file specifies these components, links, and policies, and the corresponding property sheets for user customization.
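
As a hedged sketch that follows the general shape of the PDK samples (the exact schema is described in the application modeling reference), a component entry in appmodel/metadata.json might look like this:

    [
       {
          "id"         : "example",
          "label"      : "Example Component",
          "type"       : "component",
          "attributes" : [
             {
                "id"       : "archive",
                "label"    : "Archive File",
                "type"     : "file",
                "required" : true
             }
          ]
       }
    ]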

A plug-in provides components, links, and policies whose attributes can be configured by the user in the Virtual Application Builder. Exposing a specific subset of configuration options guides the user to model and construct virtual applications in the manner that the plug-in developer supports. These artifacts might not correspond to deployed virtual machines, but rather reflect the logical model that the plug-in exposes to the user.


Deploy the application

Kernel services store the application model that you created with the Virtual Application Builder; the next step is to deploy the application.

Plug-ins can provide implementations of Java interfaces such as TopologyProvider, TopologyProcessor, ServiceProvisioner, and PostProvisioner to support the successful deployment of the virtual application. These implementations are packaged as OSGi Declarative Services (DS).

When you deploy an application from the Virtual Application Console to your target cloud, the component transforms are started and then the link transforms are started.

Specifically, kernel services are started to convert the application model from a logical description into a topology document or physical description by using the TopologyProvider and TopologyProcessor transformers. The component transforms must detect applicable policies and transform them. Required resources are provisioned with the ServiceProvisioner and PostProvisioner provisioners, and the Infrastructure as a Service (IaaS) is started to launch the virtual machines used to realize the application.

The transformers and provisioners are in the plug-in bundles/{name}.jar file. The plug-in nodeparts/{name}.tgz file contains the node parts that are downloaded and installed with the activation script on the virtual machine. The parts/{name}.tgz file contains the parts that are downloaded and installed with the workload agent on the virtual machine.

The following steps show more details of the end-to-end process.

  1. Plug-ins use implementations of TopologyProvider to convert instances of their components, links, and policies from the application model into an unresolved topology. The unresolved topology is generic; the transformers specify abstract package names rather than specific node parts and parts, and the images and instance types are not yet specified. A package represents a specific function, for example, DB2. A plug-in might provide various implementations of that function, like DB2 for Linux or DB2 for AIX, therefore, transformers are not required to be aware of cloud-specific details like what operating system images are available.

    A TopologyProvider transforms a component, link, or policy in the application model into a topology document fragment (both the application model and the topology document fragment are JSON objects). PureApplication System merges these fragments into the unresolved topology document. TopologyProcessors then process this unresolved topology document and can further change it.

    PureApplication System provides a TemplateTransform class to ease development. The TemplateTransform class provides you with the capability to write a transform as a JSON object directly by using the Apache Velocity templating engine. If more function is required, you can include JSON code, Velocity macros, and Java method invocation.
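
    For instance, a TemplateTransform template might emit a vm-template fragment in which Velocity references are resolved from component attributes at transform time. The variable names here are hypothetical:

      "vm-templates" : [
         {
            "name"     : "${prefix}-example",
            "packages" : [ "EXAMPLE" ],
            "roles"    : [
               {
                  "type"  : "EXAMPLE",
                  "name"  : "EXAMPLE",
                  "parms" : { "ARCHIVE" : "${attributes.archive}" }
               }
            ]
         }
      ]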

  2. After the unresolved topology document is complete, the next step resolves the specific node parts and parts, and images and instance types, according to the best fit for the cloud details and plug-in configuration, as specified in the config.json file. For example, the best supported operating system, the machine size (processor, memory, and disk), and 32-bit versus 64-bit architecture are chosen. The resolved topology is passed to a provisioning phase where resources are provisioned from shared services; for example, a grid is requested from the shared caching service. Plug-ins provide the specific provisioners and the results are inserted into the topology to create the final topology document. The final plug-in developer exit point is the PostProvisioner, which is started after all services are provisioned and after the topology document is finalized and written to the storehouse. The topology document is written only once to the storehouse, and is never updated. A separate deployment document is written to the storehouse to represent the deployed virtual application. This document is written and updated many times, and reflects the current state and status of the deployed virtual application.
  3. Deployment culminates in virtual machines deployed in the target IaaS cloud.

    1. As a part of the activation process, a script on each virtual machine downloads and parses the topology document; the required node parts that are specified in the topology are downloaded and installed.
    2. The workload agent is a node part. The workload agent parses the topology document, and downloads and installs the required parts. Finally, the workload agent initiates the lifecycle scripts for the specified roles and dependencies. The natural progression of the lifecycle scripts starts the application and maintains it through failure recovery.


Example deployment process

Consider deploying an application on WAS that uses a DB2 database. For the detailed documents, see Application model and topology document examples in the Plug-in development guide.

PureApplication System ships a WAS plug-in that installs a WAS image on an IaaS node or virtual machine, a DB2 plug-in that installs a DB2 image on a virtual machine, and a WAS DB2 plug-in that connects the two.

The WAS and DB2 plug-ins each provide a component in the Virtual Application Builder. You can drag each component to the canvas to create a virtual application. The Web Application component has two attributes: the application WAR file and its context root. The context root is the URL path that is used in a web browser to access the application.

The DB2 component has attributes like database name, disk size, and, optionally, the .sql or .ddl file that describes the database schema.

The WAS DB2 plug-in in the Web Application Pattern pattern type provides the link to connect these two components. Using the Virtual Application Builder, you can connect these two components with a link. The link also has parameters, like the Java Naming and Directory Interface (JNDI) name of the data source and resource references that the application uses. You must complete these attribute values in the Virtual Application Builder. The WAR file is scanned to populate the list of JNDI data source names and resource references that are found in the WAR file. All three plug-ins provide lifecycle scripts as parts. The WAS and DB2 plug-ins also both provide WAS and DB2 execution images (binary files) as parts. Installation of these parts is done during the deployment process. The WAS and DB2 plug-ins transform each of their components into virtual machine image templates from which IaaS instantiates virtual machine nodes.

The WAS plug-in also provides a policy for setting Java Virtual Machine (JVM) parameters, like heap size, for all WAS nodes in the deployment. The lifecycle scripts use the attributes that are set in the topology document during the transform and deploy process to start the nodes. The WAS plug-in adds a WAS role to the WAS virtual machine used to start and configure WAS on the virtual machine.

Similarly, the DB2 plug-in adds a DB2 role to the DB2 virtual machine. The WAS DB2 plug-in implements the dependent DB2 role that links from the WAS role to the linked DB2 instance. When the DB2 instance comes up, it exports parameters like database name, host name, port, user ID, and password. The changed.py script of the WAS DB2 plug-in runs when both WAS and DB2 are started. It imports the DB2 parameters, and, by using its own attributes, configures WAS to use the wanted DB2 instance.
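
As a minimal sketch of that step, a changed.py script along the following lines imports the exported DB2 parameters and configures the data source. The parameter names and the use of maestro.parms to carry the exported values are assumptions for illustration, not the shipped script:

import maestro

# Hypothetical changed.py sketch for the WAS DB2 link; the parameter names
# and how the DB2 role's exported values arrive are assumed for illustration.
db2_host = maestro.parms.get('HOST')
db2_port = maestro.parms.get('PORT')
db2_name = maestro.parms.get('DBNAME')
db2_user = maestro.parms.get('USER')
db2_password = maestro.parms.get('PASSWORD')

if db2_host and db2_port:
    # configure the WAS data source for the linked DB2 instance, for example
    # by invoking a wsadmin script with these values (details omitted)
    logger.info('configure data source for %s:%s/%s' % (db2_host, db2_port, db2_name))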


Manage the deployed application

When the virtual application is deployed, the application becomes a virtual application instance. The plug-ins that make up the application continue to work behind the scenes even after the virtual application is deployed.

You can use the PureApplication System user interface to view the virtual machines on which your applications are deployed. You can use the deployment panel of the user interface to view the monitoring data and logs. The plug-in appmodel/tweak.json and operation.json files contain the mutable configuration for the deployed virtual application instance, while the node parts and parts, such as the workload agent, help manage the application through failure recovery.


Plug-in development steps

Plug-ins contribute components, links, and policies that you use in the Virtual Application Builder pane to create virtual application patterns, or extend existing virtual application patterns. You can develop your own custom plug-ins. Custom plug-ins add behavior and function that users can use to enhance and customize the operation of their virtual applications and to create virtual application types. Download the Plug-in Development Kit (PDK); it is available from the IBM PureApplication System W1500 user interface Welcome page.

Consider the following design principles as you plan and develop your plug-ins:

Use the following steps to develop your own custom plug-ins. The plug-in can be developed in the Eclipse tool or the integrated development environment (IDE) of your choice.

  1. Define and package plug-in artifacts.

    1. Define the config.json file.

      The config.json file is the only required file in a plug-in. The name, version, and patterntypes elements are all required. The name element specifies the name of the plug-in, and the version element defines the version number of the plug-in. The patterntypes element specifies the pattern types with which the plug-in is associated, and you must specify one primary or one secondary pattern type at minimum. You can specify only one primary pattern type, but you can associate the plug-in with one or more secondary pattern types.

      The following example is a WAS Community Edition plug-in that extends the IBM Web Application Pattern type:

      {
         "name"    : "wasce",
         "version" : "1.0.0.1",
         "patterntypes":{
            "secondary":[{ "*":"*" }]
         },
         "packages" : {
            "WASCE" : [ {
                  "requires"  : {
                     "arch"   : "x86_64",
                     "memory" :  512,
                     "disk"   :  300
                  },
                  "parts":[ {
                        "part"  : "parts/wasce.tgz",
                        "parms" : {
                           "installDir" : "/opt/wasce"
                        }
                  } ]
            } ],
            "WASCE_SCRIPTS":[ {
                  "parts":[ {
                        "part":"parts/wasce.scripts.tgz"
                  } ]
            } ]
         }
      }
      
      This example defines most of the following common elements:

      • patterntypes element:

        In this sample, no primary element is specified and the secondary element is *:*, which means that the capabilities contributed by this plug-in show up in the Virtual Application Builder for all pattern types. To make the plug-in available only when you create patterns from the Web Application Pattern shipped with PureApplication System, specify webapp as the secondary value instead.

      • requires element:

        This element contains other elements that specify resource requirements of the plug-in.

        • os

          Specifies the operating system that the plug-in requires.

        • arch

          Virtual machine architecture that the plug-in requires. In the previous config.json sample code, the specified architecture is 64-bit x86 (x86_64).

        • cpu

          Minimum processing capacity that is required for each package defined by your plug-in. The requires element specifies the required attributes of the package, all parts, and node-parts in it. For cpu, it represents the total required resources of each type for all parts and node-parts in the package.

        • memory

          Minimum memory requirement, in megabytes, for each package defined by your plug-in. The requires element specifies the required attributes of the package, all parts and node-parts in it. For memory, it represents the total required resources of each type for all parts and node-parts in the package.

        • disk

          Minimum disk requirement, in gigabytes, for each package defined by your plug-in. The requires element specifies the required attributes of the package, all parts and node-parts in it. For disk, it represents the total required resources of each type for all parts and node-parts in the package.

        Note: During the provisioning process, PureApplication System adds up the minimum processor, memory, and disk values for each package, and provisions a virtual machine that meets the specified requirements. The disk value is converted to megabytes when it is stored in the topology document, so if the required sizes exceed the size of the available images the value might be shown as megabytes in an error message.

      • packages element:

        Defines the file packages with both the part and node part elements. The example plug-in provides two packages: WASCE and WASCE_SCRIPTS. The WASCE package contains the parts/wasce.tgz part file. This archive contains the WASCE image - all the files that compose WAS Community Edition. These binary files are required to install WAS Community Edition and are packaged directly in the plug-in.

        There are other options for specifying the required binary files. You can define a file attribute and have administrators upload the required binary files after the plug-in is loaded in PureApplication System. You can also link to a remote server that stores the required artifacts. The WASCE_SCRIPTS package provides the lifecycle scripts to install the WASCE image to the wanted location, to install the enterprise archive (EAR) or web archive (WAR) file, and to start the server.
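
      As a quick sanity check during development, a short script like the following (the file path is illustrative) verifies the rules described above: name, version, and patterntypes are required, and at least one primary or secondary pattern type must be declared:

      import json

      # Minimal sketch: validate the required config.json elements (path illustrative).
      with open('plugin/config.json') as f:
          cfg = json.load(f)

      for key in ('name', 'version', 'patterntypes'):
          assert key in cfg, '%s is required in config.json' % key

      pt = cfg['patterntypes']
      assert 'primary' in pt or 'secondary' in pt, \
          'specify at least one primary or secondary pattern type'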

  2. Define configurable application model components.

    The web and enterprise application archive components are displayed in the Virtual Application Builder. Each component is specified in the metadata.json file that is in the plugin/appmodel directory of the plug-in archive and plug-in development project. The following example illustrates the JSON to define the web archive component. Thumbnail images must be 48 x 48 pixels.

    [{
         "id"          : "WARCE",
         "label"       : "Web Application (WAS Community Edition)",
         "description" : "A web application cloud component represents an execution service for Java EE Web applications (WAR files).",
         "type"        : "component",
         "thumbnail"   : "appmodel/images/WASCE.png",
         "image"       : "appmodel/images/WASCE.png",
         "category"    : "application",
         "attributes"  : [
             {
                 "id"          : "archive",
                 "label"       : "WAR File",
                 "description" : "Specifies the web application (*.war) to be uploaded.",
                 "type"        : "file",
                 "required"    : true,
                 "extensions"  : [ "war" ] 
             }
         ] 
    }] 
    
    There is a similar stanza for the enterprise archive component for its downloadable archive.

    The first type field of the listing is important. The value options for this field are component, link, or policy, and this field defines the type in the application model. The ID of the component is WARCE. The ID can be any value, provided that it is unique.

    The category refers to the tab under which this component is shown on the pane in the Virtual Application Builder. The attributes array defines properties for the component that you are defining. You can see and specify values for these properties when you use this component in the Virtual Application Builder. Attribute types include file (shown here), string, number, boolean, array, and range.
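
    For illustration, the following hedged sketch shows what stanzas for the other attribute types might look like; the attribute names, labels, and sample values here are hypothetical, and the on-disk form is the same JSON syntax as the sample above (expressed here as a Python literal for brevity):

    # Hypothetical attribute stanzas for the other types; names and defaults
    # are illustrative, not from the shipped plug-in.
    extra_attributes = [
        {"id": "contextRoot", "label": "Context Root", "type": "string",
         "required": True},
        {"id": "sessionTimeout", "label": "Session Timeout", "type": "number",
         "sampleValue": 30},
        {"id": "enableSSL", "label": "Enable SSL", "type": "boolean",
         "sampleValue": False},
        {"id": "instances", "label": "Instance Range", "type": "range",
         "sampleValue": [1, 4]},
    ]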

  3. Define a template to convert the visual model into a physical model.

    Plug-ins must provide the knowledge and logic for how to implement, or realize, the deployment of the defined components. For this example, the plug-in must specify how to deploy an enterprise or web application component. A single transform is provided, which translates the application model that is derived from what users build in the Virtual Application Builder into a concrete topology.

    The following example displays a Velocity template that represents a transformation of the component into a JSON object that represents a fragment of the overall topology document. Each component and link must have a transform. In our plug-in, the WARCE and EARCE components share the transform template.

    {
       "vm-templates":[
          {
             "name"     : "${prefix}-wasce",
             "packages" : [ "WASCE", "WASCE_SCRIPTS" ],
             "roles"    : [
                {
                   "plugin"       : "$provider.PluginScope",
                   "name"         : "WASCE",
                   "type"         : "WASCE",
                   "quorum"       : 1,
                   "external-uri" : [{"ENDPOINT":"http://{SERVER}:8080"}],
                   "parms":{
                      "ARCHIVE"   : "$provider.generateArtifactPath( $applicationUrl, ${attributes.archive} )"
                   },
                   "requires"     : { "memory":512, "disk":300 }
                }
             ],
     
            "scaling" : { "min":1, "max":1 }
          }
       ]
    }
    
    The topology fragment is a JSON object that contains a vm-templates element, which is an array of vm-templates. A vm-template is a virtual machine template, and defines the parts, node parts, and attributes of a virtual machine to be deployed. For this example, only a single vm-template that contains four important elements is needed:

    • name: Specifies a unique name for a deployed virtual machine.
    • packages: Specifies a list of parts and node parts that are installed on each deployed virtual machine. The WASCE entry indicates the use of the WASCE virtual image. The WASCE_SCRIPTS entry specifies the WASCE lifecycle scripts.
    • roles: Specifies the roles for which parts in the plug-in start lifecycle scripts. You can have one or more roles in your plug-in, but in the sample plug-in there is a single WASCE role. When all roles on a node reach the RUNNING state, the node changes to the green RUNNING state.
    • requires: Specifies the minimum memory requirements, in megabytes, and disk requirements, in gigabytes, that are needed for the deployed virtual machine.

  4. Define lifecycle scripts to install, configure, and start software.

    In this step, you define the lifecycle scripts for the plug-in. This process includes writing scripts to install, configure, and start the plug-in components. You can view the complete scripts in the downloadable archives. The following information includes the key artifacts:

    • install.py script

      The install.py script copies the WASCE image from the download location to the wanted installDir folder. It also sets the installDir value in the environment for subsequent scripts. All parts and node parts that are installed by the PureApplication System agent are run as root. The chown -R virtuser:virtuser command changes file ownership of the installed contents to the wanted user and group. Finally, the install.py script makes the scripts in the WAS Community Edition bin directory executable files. The following sample code is the contents of the install.py script:

      import inspect
      import os

      # maestro and logger are provided by the plug-in framework
      installDir = maestro.parms['installDir']
      maestro.trace_call(logger, ['mkdir', installDir])
       
      if not 'WASCE' in maestro.node['parts']:
          maestro.node['parts']['WASCE'] = {}
      maestro.node['parts']['WASCE']['installDir'] = installDir
       
      # copy files to installDir to install WASCE
      this_file = inspect.currentframe().f_code.co_filename
      this_dir = os.path.dirname(this_file)
      rc = maestro.trace_call(logger, 'cp -r %s/files/* %s' % (this_dir, installDir), shell=True)
      maestro.check_status(rc, 'wasce cp install error')
       
      rc = maestro.trace_call(logger, ['chown', '-R', 'virtuser:virtuser', installDir])
      maestro.check_status(rc, 'wasce chown install error')
       
      # make shell scripts executable
      rc = maestro.trace_call(logger, 'chmod +x %s/bin/*.sh' % installDir, shell=True)
      maestro.check_status(rc, 'wasce chmod install error')
      
      This example shows how the script uses the maestro module that is provided within the plug-in framework. The module provides several helper methods that are useful during installation and elsewhere.
    • wasce.scripts part and install.py script

      The wasce.scripts part also contains an install.py script. This script installs the WAS Community Edition lifecycle scripts. The following is an example of the install.py script in wasce.scripts:

      # Prepare (chmod +x, dos2unix) and copy scripts to the agent scriptdir
      maestro.install_scripts('scripts')
      
    • configure.py script

      The configure.py script in the wasce.scripts part installs the user-provided application to WAS Community Edition. The script takes advantage of the hot deployment capability of WAS Community Edition and copies the application binary files to a monitored directory. The following example includes the contents of the configure.py script:

      import os

      installDir = maestro.node['parts']['WASCE']['installDir']
       
      ARCHIVE = maestro.parms['ARCHIVE']
      archiveBaseName = ARCHIVE.rsplit('/')[-1]

      # Use hot deploy
      deployDir = os.path.join(installDir, 'deploy')
      if not os.path.exists(deployDir):
              # Make directories
              os.makedirs(deployDir)
      deployFile = os.path.join(deployDir, archiveBaseName)
       
      # Download WASCE archive file
      maestro.download(ARCHIVE, deployFile)
      
    • start.py

      The start.py script in the wasce.scripts part is responsible for starting the WAS Community Edition process. After the process is started, the script updates the state of the role to RUNNING. When the deployment is in the RUNNING state, you can access the deployed application environment. The following example shows the use of the geronimo.sh start command to start WAS Community Edition, and the gsh.sh command to wait on startup:

      import os

      wait_file = os.path.join(maestro.node['scriptdir'], 'WASCE', 'wait-for-server.txt')
       
      installDir = maestro.node['parts']['WASCE']['installDir']
       
      rc = maestro.trace_call(logger, ['su', '-l', 'virtuser', installDir + '/bin/geronimo.sh', 'start'])
      maestro.check_status(rc, 'WASCE start error')
       
      logger.info('wait for WASCE server to start')
       
      rc = maestro.trace_call(logger, ['su', '-l', 'virtuser', installDir + '/bin/gsh.sh', 'source', wait_file])
      maestro.check_status(rc, 'wait for WASCE server to start error')
       
      maestro.role_status = 'RUNNING'
       
      logger.info('set WASCE role status to RUNNING')  
       
      logger.debug('Setup and start iptables')
      maestro.firewall.open_tcpin(dport=1099)
      maestro.firewall.open_tcpin(dport=8080)
      maestro.firewall.open_tcpin(dport=8443)
      
      

    There are other scripts and artifacts that make up the plug-in, but the preceding example provides an explanation of the most significant scripts.
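
    For instance, a stop script typically mirrors start.py. The following is a hypothetical sketch only (the shipped sample is not reproduced here); the STOPPED status value is an assumption:

    import maestro

    installDir = maestro.node['parts']['WASCE']['installDir']

    # stop the server with the standard Geronimo stop command
    rc = maestro.trace_call(logger, ['su', '-l', 'virtuser', installDir + '/bin/geronimo.sh', 'stop'])
    maestro.check_status(rc, 'WASCE stop error')

    maestro.role_status = 'STOPPED'  # assumption: STOPPED marks the quiesced role
    logger.info('set WASCE role status to STOPPED')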

Add your custom plug-in to PureApplication System where the plug-in can be used to create or extend a virtual application.


Plug-in development reference

This section provides plug-in development details for the entire virtual application pattern lifecycle.


Kernel services

Transformers are kernel services that convert the application model from a logical description into a topology document used to deploy the virtual application.


Transformers: TopologyProvider services

TopologyProvider implementations are plug-in-specific services for transforming components, links, and policies from an application model into an unresolved topology.

The transform step is a multi-step operation. First, the components are transformed. Associated extended policies are integrated into the component as extended attributes. Each component transformer takes the associated object from the application model as input, and returns a corresponding fragment of the topology document. The topology document and its fragments are JSON object documents. Links are transformed after the components. Links modify the component-generated topology documents; for example, parts and depends objects are added to the source roles. Each link transformer receives, as input, the topology fragments that were generated by the link source and target components.
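
To make the link step concrete, the following minimal sketch (template and role names are illustrative) shows a link transform adding a depends object to the source role inside the fragment that the source component generated earlier; compare the starget_link.vm sample later in this topic:

# Hedged sketch of the link transform step; names are illustrative.
source_fragment = {
    "vm-templates": [{
        "name": "app1-was",
        "roles": [{"name": "server", "type": "WAS", "depends": []}],
    }]
}

# the depends entry names the target as <vm-template name>.<role name>
depends_entry = {
    "role": "app1-db2.DB2",
    "type": "DB2",
    "parms": {"jndiName": "jdbc/myds"},
}
source_fragment["vm-templates"][0]["roles"][0]["depends"].append(depends_entry)

There are two types of transformers: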


  1. Template-based implementations

    Most transforms can be implemented as a template of the intended JSON document (topology fragment for components; depends objects for links). IBM PureApplication System W1500 embeds Apache Velocity 1.6.2 as a template engine. For more information about Apache Velocity, see the user guide at the following location: http://velocity.apache.org/engine/devel/user-guide.html. Template-based implementations include:

    Component document

    The component name must match the "id" of the component, link, and policy defined in the plug-in appmodel/metadata.json file. Template files are specified as component properties, where the value is a path relative to the plug-in root. For example, the transformer for the sample starget component and link looks like:

    <?xml version="1.0" encoding="UTF-8"?>
    <scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="starget">
        <implementation class="com.ibm.maestro.model.transform.template.TemplateTransformer"/>
        <service>
        <provide interface="com.ibm.maestro.model.transform.TopologyProvider"/>
        </service>
        <property name="component.template" type="String" 
    value="templates/starget_component.vm"/>
        <property name="link.template" type="String"
    value="templates/starget_link.vm"/>
    </scr:component>
    

    Implementation

    The sample starget_component.vm illustrates component transformation as follows:

    PureApplication System sets data in the Velocity context that the template author might reference by using variables, such as the $prefix and $attributes.st1 variables that are shown in the sample starget_component.vm that follows. For more information about the context, see the developer guide at the following location: http://velocity.apache.org/engine/devel/developer-guide.html.

    $attributes are attributes that are defined in the appmodel/metadata.json file for the component. These variables are passed in the context as follows. Review the Javadoc included in the Plug-in Development Kit (PDK) for more information about the corresponding Java parameters.

    Note: Javadoc is included with the pdk.zip. Extract the javadoc.zip file and expand the file to get the Javadoc.

    applicationUrl

    The URL of the root of the application artifacts (within the storehouse). Includes deploymentId.

    transformer

    Reference to transformer object.

    attributes

    The attributes of the component or link object from the application model.

    config

    The current value of the parameters for the plug-in from the config.json file.

    provider

    A reference to this class.

    context

    A reference to the complete context; a VelocityContext object.

    prefix

    The prefix for the vm-template name to ensure uniqueness in names in the topology document.

    component

    The component object from the application model in component transforms. A JSONObject of the node from the appModel being transformed.

    {
        "vm-templates": [
            {
                "scaling": {
                        "min": 1,
                        "max": 1
                },
                "name": "${prefix}-starget",
                "roles": [
                    {
                        "parms": {
                            "st1": "$attributes.st1"
                        },
                        "type": "starget",
                        "name": "starget"
                    } 
                ]
            } 
        ]
    }
    

    The sample starget_link.vm illustrates link transformation. The generated JSON goes into the depends array of the source role ssource in the sourceFragment vm-template. This ssource role depends on the $target.role in the $target.template. The following variables are passed in the context:

    sourceRole

    Fragment represents the additional depends objects for the source role.

    targetFragment

    Partial topology document that is associated with the target component for a link transform.

    ## Link templates render the depends objects to be added to the source role.
     
    ## sourceRole is required to locate the source of the link.  Value is the type of the source role.
    #set( $sourceRole = "ssource" )
     
    ## sourcePackages is an optional array.  Values in the array are added to the packages of the 
    ## vm-template that is hosting the source role.
    #set( $sourcePackages = ["pkg2"] )
     
    ## Obtain a tuple related to the matching target role: 
    ## target.template == vm-template that holds the target role; target.role == role ## String argument is the type of the target role.
    #set( $target = $provider.getMatchedRole($targetFragment, "starget") )
     
    ## Validate target.  If not found, throw HttpException 
    #if( $target == $null )
        $provider.throwHttpException("Target Role starget not found.")
    #end
     
    [
        {
            "role": "${target.template.name}.${target.role.name}",
            "type": "starget",
            "parms": {
                "sl1": "$attributes.sl1"
            }
        }
    ]
    

    The sample ssource_component.vm illustrates a more complex component transformation. The #if_value is a Velocimacro for conditional rendering of formatted strings: If $map contains a non-empty value for $key, then $format_str is evaluated and the value is available as $value.

    {
        "vm-templates": [
            {
                "scaling": {
                        "min": 1,
                        "max": 1
                },
                "name": "${prefix}-ssource",
                "roles": [
                    {
                        "parms": {
    ## Handling optional attributes:
    ## macro syntax:  #macro( if_value $map $key $format_str )
    ## String value:
                            #if_value( $attributes, "ss_s", '"ss_s": "$value",' )
    ## Number value:
                            #if_value( $attributes, "ss_n", '"ss_n": $value,' )
    ## Boolean value:
                            #if_value( $attributes, "ss_b", '"ss_b": $value,' )
    ## Missing value -- will not render:
                            #if_value( $attributes, "not_defined", '"not_defined": "$value",' )
     
    ## For artifacts, Inlet may send app model with absolute URLs for artifacts; 
    ## other request paths might invoke with relative URLs.  
    ## So use provider.generateArtifactPath(), which invokes URI.resolve() that handles both cases.
     
    ## Handling required attributes; throws an exception if the attribute is null/empty/not defined
                            "ss_f": "$provider.generateArtifactPath( $applicationUrl, ${attributes.ss_s} )",
     
    ## Handling range value (ss3)
                            "ss_r_min":"$attributes.ss_r.get(0)",
                            "ss_r_max":"$attributes.ss_r.get(1)",
                            
    ## Handling policies:  spolicy is defined; not_policy is not
    #set( $spattrs = $provider.getPolicyAttributes($component, "spolicy") )
                            #if_value( $spattrs, "sp1", '"sp1": "$value",' )
                            #if_value( $spattrs, "not_defined", '"not_defined": "$value",' )
                            
    #set( $npattrs = $provider.getPolicyAttributes($component, "no_policy") )
                            #if_value( $npattrs, "np1", '"np1": "$value",' )
                            
    ## Handling required config parms; throws an exception if the parm is null/empty/not defined
                            "cp1": "$config.cp1"
                        },
                        "type": "ssource",
                        "name": "ssource"
                    } 
                ]
            } 
        ]
    }
    

    Alternatively, #if_else_value adds a format_else parameter for cases when there is no value for $key in the $map, or when the value of $key is null or an empty string. It is a good option for providing default values for optional attributes. In the Hello sample, hellocenter.vm includes an example of #if_else_value.

    "Users"   : #if_else_value($attributes, "Users", $attributes.Users.serialize(), []),
    
    If there are no users, Users is an empty JSONArray.

    # Render a formatted string if the mapped value exists and is not empty. 
    #macro( if_value $map $key $format )
     
    # Render a formatted string if the mapped value exists and is not empty, else a different string. 
    #macro( if_else_value $map $key $format_if, $format_else )
    

    Other available template features

    Use static Java classes

    You can insert Java classes into the context with the $provider.getClassForName method. This feature is useful when static methods are used on these classes in your template. For example:

    #set( $Math = $provider.getClassForName("java.lang.Math") )
    "ss_r_math_max":$Math.max($attributes.ss_r.get(0), $attributes.ss_r.get(1)),
    

    ss_r is a range value, as defined in the plug-in appmodel/metadata.json file, which is a list with two long integer values. The lower range value is the first value in the list, with index 0. The upper range is the second value, with index 1. The previous example returns the upper range value, but is intended to show the usage of the java.lang.Math.max static method.

  2. Java implementations

    For cases where templates are not sufficient, Java implementations can be used. Java implementations can generate the JSON documents with the included JSON APIs (com.ibm.json.java.*) or by modifying template output. Another option is to use templates and enhance them with Java functions. See the section, Enhancing template transforms with Java code, for details. This enhanced-template approach is preferred.

    • Component document

      The component name must match the ID of the component, link, and policy that are defined in the plug-in appmodel/metadata.json file. For example, the transformer for the web archive (WAR) component is as follows:

      <?xml version="1.0" encoding="UTF-8"?>
      <scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="WAR">
          <implementation class="com.ibm.maestro.model.transform.was.WARTransformer"/>
          <service>
            <provide interface="com.ibm.maestro.model.transform.TopologyProvider"/>
          </service>
      </scr:component>
      
    • Implementation

      Implementations extend com.ibm.maestro.model.transform.TopologyProvider and can implement component and link transformations by overriding the corresponding methods:

      public JSONObject transformComponent(
          String vmTemplateNamePrefix,
          String applicationUrl, 
          JSONObject applicationComponent, 
          Transformer transformer)
        throws Exception {
        return new JSONObject();
      }
       
      public void transformLink(
          JSONObject sourceFragment,
          JSONObject targetFragment,
          String applicationUrl,
          JSONObject applicationLink,
          Transformer transformer)
        throws Exception {
      }
      

      • Invoke templates

        Java implementations can start templates by using the following methods of TopologyProvider:

        public static JSONArtifact renderTemplateToJSON(Bundle b, String template, String logTag, Context context) throws HttpException;
         
        public static String renderTemplate(Bundle b, String template, String logTag, Context context) throws HttpException;
        
        For example, the WAS transformer starts a template to generate the topology fragment for a WAS instance as follows:

        protected void activate(ComponentContext context){    
          _bundle = context.getBundleContext().getBundle();
        }
         
        @Override
        public JSONObject transformComponent(String prefix, String applicationUrl, JSONObject component, Transformer transformer) throws Exception {
          JSONObject topology;
          JSONObject scalingPolicy = getPolicy(component, "ScalingPolicyofWAS");
          String vmTemplateName = prefix + "-was";
         
          if (scalingPolicy == null) {
            VelocityContext context = new VelocityContext();
            context.put(TemplateTransformer.PREFIX, prefix);
            context.put(TemplateTransformer.APPLICATION_URL, applicationUrl);  // Value ends with a slash.
            context.put(TemplateTransformer.COMPONENT, component);
            
            JSONObject attributes = (JSONObject) component.get("attributes");
            context.put(TemplateTransformer.ATTRIBUTES, new RequiredMap(attributes));
            context.put(TemplateTransformer.CONFIG, new RequiredMap(getConfigParms()));
            context.put(TemplateTransformer.PROVIDER, this);
            
            String logTag = "WAS:templates/SingleWAS.vm";
            topology = (JSONObject) renderTemplateToJSON(_bundle, "templates/SingleWAS.vm", logTag, context);
          }
          // handling of a scaling policy, when one is present, is omitted from this excerpt
          return topology;
        }


Define external storage

Plug-ins can define external storage with a storage-templates element in either a Velocity based or Java based transformer. The storage-templates element is at the same level as vm-templates in the topology document.

{
   "vm-templates": [ ... ],
   "storage-templates": [
      {
         "parms": {
            "size": 4,
            "format": "ext3",
            "type": "auto"
         },
         "name": "db2-storage"
      }
   ]
}

Within the storage-templates element, you define parameters for the size, format, and type of storage with the parms element. Supported parameters:

size
Required storage size in GB.
format
The only value is ext3 for the third extended file system format.
type
Supported values are auto or fixed.

You must specify a name for the storage with the name element. In the storage-templates example, the storage is named db2-storage.

You can reference a storage template in the vm-templates element. Within a storage array, you reference each storage name with the storage-ref element. The following example shows a DB2 vm-template element that requests to add the disk with the name db2-storage by mounting the disk to the specified mount point /home/db2inst1.

   "storage":[
{
"storage-ref":"db2-storage",
"device":"\/dev\/sdb",
"mount-point":"\/home\/db2inst1"
}
]


Enhancing template transforms with Java code

Template transforms are recommended. If you need Java code, do as much as you can with the template, and then add Java methods that you start from your template as follows:

  1. Create your Java class and have it extend TemplateTransformer.
  2. Add your Java methods. The public methods can be started from your template by using $provider.myMethod(). You can pass parameters into the Java methods.
  3. Update your OSGi component document to set the implementation class to your new Java class, not TemplateTransformer. There are two methods for updating the velocity context:

    • The velocity context is itself available within the context as $context; therefore, you can pass it into a Java method by using $context.
    • Implement the protected VelocityContext createContext(String applicationUrl, JSONObject component) method. In your method, call super.createContext(applicationUrl, component). The returned VelocityContext is the velocity context, to which you can add your custom objects.

      Be careful not to overwrite any existing keys.

The following code example illustrates enhancing the templates:

osgi.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="WASDB2">
  <implementation class="com.ibm.maestro.model.transform.wasdb2.WASDB2LinkTransform"/>
  <service>
    <provide interface="com.ibm.maestro.model.transform.TopologyProvider"/>
  </service>
    <property name="link.template" type="String" value="templates/wasdb2_link.vm"/>
</scr:component>
  

Java file:

package com.ibm.maestro.model.transform.wasdb2;
 
import com.ibm.maestro.common.http.HttpException;
import com.ibm.maestro.model.transform.template.RequiredMap;
import com.ibm.maestro.model.transform.template.TemplateTransformer;
 
public class WASDB2LinkTransform extends TemplateTransformer {
 
    public static JndiNameResourceRefs getJndiNameAndResourceRefs(RequiredMap attributes)
            throws HttpException {
        return JndiNameResourceRefs.getJndiNameAndResourceRefs(attributes);
    }
 
}


Plug-in components available as OSGi Declarative Services

PureApplication System provides limited support for OSGi services within plug-ins. Specifically, plug-ins can provide implementations of specific PureApplication System service interfaces, such as the TopologyProvider and RegistryProvider interfaces that are described in this reference.

Kernel services manage multiple versions of these services according to the plug-ins associated with the application model. However, version management does not apply to other services, so errors might occur if a plug-in exports another service implementation.


Shared services

A shared service is a predefined pattern that is deployed and shared by multiple client application deployments, such as virtual applications and virtual systems, in the cloud. A shared service provides certain runtime services to multiple applications, or services to the user on behalf of multiple applications. You usually find only a single reference to the shared service per cloud group, which is a physical group of hardware that defines a cloud. That shared service can be used by all application deployments in the cloud group, which means the shared service must provide multi-tenant access.

The infrastructure provides a common administration framework and registry services. A single reference to a shared service is allowed in each cloud group. UI panels enable deployment and management of shared services. There are two types of shared services that can be created...

A registry contains information about deployed shared services in each cloud group. The client of a shared service can query the registry for information about the shared service it wants to interact with. The shared service implementation determines what information the client requires and also what client API versions it can support.

A shared service is developed similar to plug-in development for virtual applications. There is extra metadata and capabilities in some scenarios.

To develop a shared service:

  1. Create the predefined application model and property metadata.

    The appmodel.json file represents the serialization of the model defined in the Virtual Application Builder user interface for a regular virtual application. Components (nodes) and links, along with user-specified property values, are represented. For shared services, the properties must be predefined.

    In addition to nodes, links and other attributes of an application model, the following attributes are required for a shared service:

    app_type

    Set to "service".

    serviceversion

    Unique shared service application model version in VRMF format. For example, 1.0.0.0.

    servicesupportedclients

    List of supported client versions that can use this shared service. Example patterns:

    • *: matches all versions
    • [a,b]: matches all versions between a and b, including a and b
    • (a,b): matches all versions between a and b, excluding a and b
    • [*,b]: matches all versions up to b, including b
    • [a,*]: matches all versions a and greater

    servicedisplayname

    Display name for similarly grouped services.

    servicename

    Name of the service. Used as the name of the service registry documents.

    id

    Set to "sharedservice". This attribute is an attribute on a node.

    type

    Unique ID that links the transformer and appmodel metadata to this node.

    servicetype

    Set to External in the appmodel.json to identify a service as an external shared service.

    The following example shows the use of these attributes:

    {
       "model":{
        "name":"Shared Service",
        "app_type":"service",
        "patterntype":"foundation",
        "version":"1.0",
        "serviceversion":"1.0.0.0",
        "servicesupportedclients":"[0.0,1.0]",
        "servicedisplayname":"servicegroup",
        "servicename":"service",
        "description":"comments",
        "nodes":[{
           "attributes":{.},
          "id":"sharedservice",
          "type":"uniqueComponentOSGIID"
        }],
        "links":[]
      }
    }
    

    The appmodel/metadata.json file describes the components, links, and policies that are implemented by the plug-in. Shared services use the same attribute design. Default attributes can be set by setting the specific attribute inside the predefined appmodel or by using the sampleValue field inside the metadata.json.

  2. Define a registry provider class.

    The shared service registry contains information that clients can look up to find information that is shared by the service about itself. The infrastructure provides this ability through the shared service specific implementation of the com.ibm.maestro.iaas.RegistryProvider class. The following method allows the shared service to return information to the client based on its model and deployment configuration.

      public JSONArtifact getRegistry(String clientVersion, Map<String, JSONObject> deploymentInfo) throws HttpException;

    deploymentInfo contains the appmodel.json, deployment.json, topology.json, and registry.json documents.

  3. Create the topology template for the appmodel.

    Transformers are services that convert the application model from a logical description into a topology document used to deploy the virtual application. Shared services (like other plug-ins) can define a template for a topology document and transformers that convert the template to an actual topology document during deployment. The following attribute must be provided by a shared service to reference the shared service registry provider class:

      "service-registry": [{ "type": "<RegistryProvider implementation>" }],

    Do not include the vm-templates attribute section for an external shared service topology template since it is pointing to an external resource implementation of the shared service.

  4. Create the shared service lifecycle scripts.

    For more information about developing lifecycle scripts, see the Develop lifecycle scripts section of the plug-in development guide. When you develop lifecycle scripts for shared services, consider functions such as standing up and recovering from failures, providing administrative operations for the service, scalability, and other similar functions.

  5. Optional. Make a shared service public certificate available for client access.

    The infrastructure provides a central location for certificates that a shared service must make available securely to deployments that act as clients to it. The following com.ibm.maestro.iaas.RegistryService APIs can be called by the shared service to manage its certificates:

    public void putCertificate(RestClient restClient, String putObj, String cloudGroup, String sharedServiceName, String sharedServiceVersion) throws CredentialExpiredException, MaestroSecurityException, HttpException;

    public void deleteCertificate(RestClient restClient, String cloudGroup, String sharedServiceName, String sharedServiceVersion) throws CredentialExpiredException, MaestroSecurityException, HttpException;

  6. Expose administrative HTTP calls for the clients to interact with the service.

    Each shared service must expose an HTTP administrative interface so that clients can easily register and interact with the service (reserve resources on the service). The usage of the service can then be custom to that service (by using the reserved resources over non-HTTP interaction). This step is where the client version helps determine the client contract for the administrative HTTP interaction before the client uses the service.

    The Shared Service Infrastructure framework provides a helper feature to create the HTTP administrative interface easily if the shared service chooses to use more of the framework. For more information, see the section "Generic Shared Service REST API support and client interaction".


Shared Service Client development

Application deployments can enable a shared service consumer model:

The infrastructure supports predeployment service reference checks that are done on behalf of the shared service client and can stop the deployment from creating resources in cloud.

Post deployment, when the client plug-in packages are extracted on the deployed virtual machines, the client can interact with the shared service through its lifecycle scripts. The infrastructure provides a maestro.registry python library for registry and certificate related method calls from lifecycle scripts.

import maestro
parms=maestro.registry.getRegistry(shared_service_name, shared_service_client_version)
masterIP = parms['<registry parm name>']

The removeRegistryRef call can be used when the client no longer wants to be connected to the shared service, or when the client is about to be deleted:

import maestro
maestro.registry.removeRegistryRef(shared_service_name, shared_service_client_version)

If the shared service exposed a public certificate in the shared service framework, then the client can obtain that certificate with the following command:

import maestro 
maestro.registry.getCertificate(shared_service_name, shared_service_client_version, temp_cert_file)

The following example shows how to use the commands above in a lifecycle script. The script displays the IP address of a deployed caching service, downloads the public certificate, and provides a signal to the service that the script has finished running.

import maestro
 
service_name = "caching"
client_version = "2.0"
 
# get registry information
registry_info = maestro.registry.getRegistry(service_name, client_version)
ip_address = registry_info['cache-ip']
 
print "Caching service found at " + ip_address
 
# this is where the certificate will be stored
certificate_location = "/tmp/caching_certificate"
 
# download certificate
maestro.registry.getCertificate(service_name, client_version, certificate_location)
 
# signal the service that we have finished interacting with it
maestro.registry.removeRegistryRef(service_name, client_version)

Shared services providers must clearly document the client interaction model that is based on the capabilities the service provides. Document this interaction by using the client version, which becomes a client contract that defines exactly what to expect for that version. Different shared service versions might return different information through the registry to the client. The client service provisioner and lifecycle scripts must understand the right way to interact with the service based on the registry response. For example, consider a shared service version 1.0 returning host, user name, and password attributes through a getRegistry call to its client of version 1.0. If a port must be added to the response, the service can indicate that it will do so for clients of version 2.0, which are written to handle the additional attribute. In its next revision, the service can then support both clients of version 1.0 and 2.0, providing each with the expected response.

The version of the client also helps the shared service infrastructure determine which shared service versions the client can interact with in a cloud group. For example, if shared service version 1.0 supports only client version 1.0, then the infrastructure does not return a registry object to a client of version 2.0 requesting the registry.
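
As an illustration of these matching rules, the following minimal sketch (not product code) checks a client version against a servicesupportedclients pattern; it assumes VRMF-style dotted versions:

def _v(s):
    # parse a VRMF-style version string, for example "1.0.0.0", into a tuple
    return tuple(int(p) for p in s.split('.'))

def version_supported(pattern, version):
    if pattern == '*':
        return True
    inclusive = pattern.startswith('[')          # [a,b] includes the endpoints
    lo, hi = pattern[1:-1].split(',')
    v = _v(version)
    if lo != '*' and (v < _v(lo) or (not inclusive and v == _v(lo))):
        return False
    if hi != '*' and (v > _v(hi) or (not inclusive and v == _v(hi))):
        return False
    return True

print version_supported('[0.0,1.0]', '1.0')   # True: the registry is returned
print version_supported('[0.0,1.0]', '2.0')   # False: no registry object for this client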


Shared Service client tracking and operation invocation

Certain shared services might need to track the clients that connect to them. This capability might be needed so that the shared service can communicate its start or delete lifecycle events to the clients. A client might choose to stop requesting information from the shared service when the client is deleted, for example. This ability is enabled by including the following metadata in the shared service vm template.

"service-registry": [{
        "type": "<RegistryProvider implementation>"
        "trackclientusage":true
        
}],
If this metadata is specified, the infrastructure maintains a list of clients that request registry information from the service. The infrastructure also provides the capability for the service to call operations on clients that are tracked as using it. The operation is defined on the client:

public void callOperationOnSSClients(RestClient restClient, JSONObject serviceInfo, JSONObject operationParms) throws HttpException;

This example shows how to call the operation from the shared service ServiceProvisioner:

public StateAndDescription deleteService(String serviceReferenceName,
      JSONObject serviceDescription, RestClient restClient)
      throws Exception {
        final String METHOD_NAME = "deleteService";
        if (logger.isLoggable(Level.FINE)) {
            logger.logp(Level.FINE, CLASS_NAME, METHOD_NAME, "deleteService: " + serviceReferenceName + "/" + serviceDescription);
            logger.logp(Level.FINE, CLASS_NAME, METHOD_NAME, "Calling the disconnect operation on shared service clients");
        }
        
        JSONObject operationParms = new JSONObject();
        operationParms.put("role","<role_name>");
        operationParms.put("type", "<type_name>");
        operationParms.put("script", "<scriptname>");
        operationParms.put("method", "<method_name>");
        operationParms.put("parameters", new JSONObject());
        this.registrySvc.callOperationOnSSClients(restClient, serviceDescription, operationParms);
        // the remainder of deleteService, including the returned StateAndDescription, is omitted from this excerpt
}


Generic Shared Service REST API support and client interaction

The Generic Shared Service REST infrastructure helps provide a common HTTP-based interaction model for clients to start methods that are exposed by shared services. Shared services can provide operation metadata and python scripts that implement the method. The client can call the generic shared services REST APIs to start the operations. For example, a client might call GET on the REST URL: https://<Master_Agent_VM>:9999/sharedservice/<servicename>/<resource>

Follow these steps to configure the server-side metadata and setup.

  1. Service metadata must be provided in a JSON file and packaged in the shared service nodepart data. For example: plugin/nodeparts/cachingrestapi/cachingrestapi/properties/cachingrestapi.json.

    The JSON object in the metadata must contain the following attributes:

    servicename

    The name of the shared service: "caching" or "elb".

    operations

    JSON array of operations that is exposed by the service. Each object in this array defines the following attributes:

    type

    HTTP operation type: GET, PUT, POST, or DELETE.

    parms

    JSON array of objects that represents parameters for each call of the type. The object must define the following attributes:

    resource

    Resource on which operation is carried out. Maps to URL segment after the servicename on the REST URL.

    role

    The role this operation is to run against.

    script

    <script_name> <method_name>. The Python script that defines the operation, and the name of the method within it to start.

    clientdeploymentexposure

    Optional. Determines whether the call can originate from client deployed VMs in addition to from the KernelServices process. The default value is false.

    timeOut

    Optional. Operation timeout. The default value is 60000. If this attribute is 0, the operation runs synchronously with no timeout. If this attribute is greater than 0, the operation waits the specified amount of time, in milliseconds, for this call to return. The operation responds with an HTTP 202 response if a timeout occurs.

    pattern

    This attribute can be used to match further segments of the URL to feed further parms into the operation. An entry that is wrapped in curly braces matches as a name-value pair against the incoming URL and passes into the operation as a parms. For example, consider a REST URL of the form: https://<host>:<port>/sharedservices/caching/sessiongrid/gridA and a pattern {gridName}. The name and value pair gridName=gridA feeds into the operation as an additional argument.
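
    As an illustrative sketch (not the product code), the following shows how a pattern entry such as {gridName} turns the trailing URL segment into a name-value parm:

    # illustrative only: map a "{name}" pattern onto the trailing URL segment
    def match_pattern(pattern, segment):
        if pattern.startswith('{') and pattern.endswith('}'):
            return {pattern[1:-1]: segment}
        return {}

    print match_pattern('{gridName}', 'gridA')   # {'gridName': 'gridA'}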

    Here is an example metadata JSON file:

    {
      "servicename": "caching", 
      "operations": [
        {
          "type": "PUT", 
          "parms": [
            {
              "resource": "sessionGrid", 
              "role": "Caching-Master.Caching", 
              "script": "restapi.py createSession",
              "timeout": 1200000
            },
            {
              "resource": "simpleGrid", 
              "role": "Caching-Master.Caching", 
              "script": "restapi.py createSimple"
            },
            {
              "resource": "dynamicGrid", 
              "role": "Caching-Master.Caching", 
              "script": "restapi.py createDynamic"
            }
          ]
        }, 
        {
          "type":"DELETE",
          "parms":[
            {
              "resource": "sessionGrid", 
              "role": "Caching-Master.Caching", 
              "script": "restapi.py deleteSession",
              "pattern": "{gridname}"
            }, 
            {
              "resource": "simpleGrid", 
              "role": "Caching-Master.Caching", 
              "script": "restapi.py deleteSimple",
              "pattern": "{gridname}"
            },
            {
              "resource": "dynamicGrid", 
              "role": "Caching-Master.Caching", 
              "script": "restapi.py deleteDynamic",
              "pattern": "{gridname}"
            }
          ]
        },
        {
          "type":"GET",
          "parms":[
            {
              "resource": "sessionGrid", 
              "clientdeploymentexposure":true,
              "role": "Caching-Master.Caching", 
              "script": "restapi.py gridExists",
              "pattern": "{gridname}"
            },
            {
              "resource": "publicCert", 
              "clientdeploymentexposure":true,
              "role": "Caching-Master.Caching", 
              "script": "restapi.py downloadCert",
              "responseType": "zipFile"
            }
          ]
        }
      ]
    }
    

  2. During VM start of the shared service deployment, the metadata must be copied to the following location on the VM: /0config/sharedservices/restmetadata/. This metadata can be copied by providing a nodepart installation script that runs during the deployment. For example: /plugin/nodeparts/cachingrestapi/common/install/7_cachingrestapi.py, which contains:

    import os

    restapidir = '/0config/sharedservices/restmetadata'
    restapifile = '/cachingrestapi.json'
    restapi = '../../cachingrestapi/properties/cachingrestapi.json'
     
    os.popen('mkdir -p ' + restapidir)
    os.popen('mv ' + restapi + ' ' + restapidir + restapifile)
    

  3. Provide the implementation of the methods in a python script. Define operations that run based on parms from the different sources: the input JSON object to the REST call, or URL pattern matching. For example: parts/caching.scripts/scripts/CachingMaster/restapi.py.
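
    A hypothetical sketch of one such operation method follows; how parameters are delivered to the method and how results are returned are assumptions here, so consult the PDK samples for the actual contract. The 'does not exist' message matches the check in the client example later in this section:

    # hypothetical restapi.py sketch; the gridname argument is assumed to be
    # fed from the {gridname} URL pattern
    def gridExists(gridname=None):
        existing = ['gridA', 'gridB']             # illustrative in-memory state
        if gridname in existing:
            return '%s exists' % gridname
        return '%s does not exist' % gridname

    The client can call the APIs through the following convenience methods: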

    • If you call operations on the service from within the KernelServices process (through a Service Provisioner), use the following method defined on com.ibm.maestro.iaas.RegistryService

      public OperationResponse callOperationOnSharedService(String serviceName, String clientCloudGroup, String clientVersion, 
      String serviceIP, String resourceUrl, String operationType, JSONObject operationParms) throws HttpException;
      

      For example:

      JSONObject clientJSON = new JSONObject();
      clientJSON.put("user", user);
      clientJSON.put("password", password);
      clientJSON.put("gridname", grid);
      clientJSON.put("gridcap", cachingCap);
 
      OperationResponse response = this.registrySvc.callOperationOnSharedService("caching",
            (String) serviceDescription.get("cloud_group"), "3.0", ip,
            "sessionGrid", "PUT", clientJSON);
      int status = response.getStatusCode();
      JSONObject result = response.getOperationResponse();
      

    • For virtual application deployments, the shared services infrastructure provides a utility to interact with shared services. This utility can be accessed within lifecycle scripts.

      The utility script provides only one method, callSSrestapi: sharedservices.callSSrestapi(url, method, data=None, filepath_input=None, filepath_output=None). This method makes a REST API call to the specified URL with the specified method. This function returns the HTTP status code of the call and the returned JSON document, if one was provided.

      The caller can set the filepath_input parameter to store the returned file at the specified location (assuming the directory exists). Set this parameter if the Shared Service returns a compressed file from this call. Note that if filepath_input is not set, any response file from the Shared Service is deleted after it is parsed.

      If the caller must send a file to the shared service, the filepath_output parameter can be set to the location of this file, but the method argument must be PUT or POST to send the file. If PUT or POST methods are used, either the data or filepath_output argument is expected.

      To access this function, you must include the following code in the lifecycle script:

      import sys
      ss_path = '/0config/nodepkgs/helper/scripts'
      if not ss_path in sys.path:
          sys.path.append(ss_path)
      import sharedservices
      

      This example script uses the utility functions to interact with the caching shared service. It retrieves the public certificate and checks whether a named caching grid exists.

      import sys
      import maestro
       
      ss_path = '/0config/nodepkgs/helper/scripts'
      if not ss_path in sys.path:
          sys.path.append(ss_path)
      import sharedservices
       
      service = "caching"
      version = "2.0"
       
      regInfo = maestro.registry.getRegistry(service, version)
      try:
          ipAddr = regInfo['cache-ip']
          print "caching service found at " + ipAddr
      except KeyError:
          print "caching service not found"
          sys.exit(1)
       
       
      cert_zip_location = "cert.zip"
       
      # download zip (that contains the certificate)
      cert_response = sharedservices.callSSrestapi("https://" + ipAddr + ":9999/sharedservice/caching/publicCert", 
      "GET", filepath_input=cert_zip_location)
       
       
      # check if a grid exists
      grid_name = "SomeGrid"
       
      http_code, grid_response = sharedservices.callSSrestapi("https://" + ipAddr + ":9999/sharedservice/caching/gridExists/", "GET")
      ret =  grid_response['OperationResults'][0]['return_value']
       
      if (ret.find(grid_name + ' does not exist') >= 0):
          print grid_name + " does not exist"
      else:
          print grid_name + " exists"
       
      # inform the caching service that we have finished interacting with it
      maestro.registry.removeRegistryRef(service, version)
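
      To send a file to a shared service from a lifecycle script, set the filepath_output parameter and use the PUT or POST method. The following sketch is illustrative only; the resource name uploadConfig and the local file path are assumptions, not part of the caching service API.

      # Hypothetical sketch: upload a local file to a shared service resource.
      # "uploadConfig" and the local path are illustrative assumptions.
      http_code, reply = sharedservices.callSSrestapi(
          "https://" + ipAddr + ":9999/sharedservice/caching/uploadConfig",
          "PUT",
          filepath_output="/tmp/config.zip")
      if http_code != 200:
          print "upload failed with HTTP status " + str(http_code)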
      

    Manual testing of the API is possible by running the following commands on the deployed VM:

    cd /0config
    export $(./get_userdata.sh)
    export header=$(/opt/python-2.6.4/bin/python create_security_header.py)
    curl -H "X-IWD-Authorization: $header" -kv -H Content-Type:application/json -X PUT -d '{"input":"hi"}'
    https://<shared service IP>:9999/sharedservice/<servicename>/<resourcename>
    


Virtual System client interaction with shared services

The shared services infrastructure also provides support for virtual system deployments to be shared services clients. The following libraries and APIs can be called through script packages on virtual systems to query for the shared service registry, start REST APIs exposed by the shared service, and other functions.

The shared services plug-in for virtual systems requires that maestro is installed on a deployed virtual system. By default, maestro is not enabled for such deployments. The steps for enabling maestro are:

  1. From the dashboard, click Cloud > System Plug-ins.
  2. Select the Foundation pattern type.
  3. Select the virtualsystem plug-in.
  4. Click Configure.
  5. Set Plug-ins for virtual systems to enabled.

After you follow these steps, all subsequently deployed virtual system patterns install maestro and include the shared services plug-in for virtual systems.

The shared services client is a node package that attempts to mimic the virtual application lifecycle scripts in function. However, there are a few significant differences in their operation and how they are called.

To interact with shared services, you must import the shared service client script:

import sys
ss_path = '/0config/nodepkgs/helper/scripts'
if not ss_path in sys.path:
    sys.path.append(ss_path)
import sharedserviceclient

After you import this script, your code can access the following functions:

sharedserviceclient.getRegistry(service_name, client_version)

This function returns registry information of the named shared service, if the client version is supported. Unlike maestro.registry.getRegistry, this function returns the response as a text string.

sharedserviceclient.removeRegistryRef(service_name, client_version)

This function is called when a client finishes interacting with the named shared service, which signals the service to clean up any obsolete resources. If this operation is successful, the return value is 0.

sharedserviceclient.restapi(url, method, data=None)

This function calls a REST API method, by using curl, to interact with the shared service at the provided URL. Accepted methods are GET, DELETE, PUT, and POST. If the method is PUT or POST, the data parameter is required.

The data argument must be a text string. Similarly, this function returns the response from the service as a text string.
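
For a PUT or POST call, pass the payload as a text string. A minimal sketch, with a hypothetical resource name:

# Sketch: PUT a JSON payload (as a text string) to a shared service.
# "someResource" is a hypothetical resource name, and the IP address
# would normally be parsed from the registry information.
service_ip = "172.16.68.128"
payload = '{"gridname": "WAS_GRID"}'
reply = sharedserviceclient.restapi(
    "https://" + service_ip + ":9999/sharedservice/caching/someResource",
    "PUT",
    payload)
print str(reply)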

The following example shows how to interact with the caching shared service to see whether a named data grid exists:

import sys
ss_path = '/0config/nodepkgs/helper/scripts'
if not ss_path in sys.path:
    sys.path.append(ss_path)
import sharedserviceclient


def find_cache_IP(reg_info):
    """
    Finds the Caching Shared Service IP address in the registry information.
    """
    str_info = str(reg_info)

    start = 'cache-ip":"'
    end = '",'

    start_cip = str_info.find(start)
    end_cip = str_info.find(end, start_cip)

    return str_info[start_cip+len(start):end_cip]

def does_grid_exist(grid_name, caching_ip):
    """
    Queries the Caching Shared Service to see if a grid exists.
    Returns 1 if the grid exists, 0 otherwise.
    """
    str_reply = str(sharedserviceclient.restapi("https://" + caching_ip + ":9999/sharedservice/caching/gridExists/" + grid_name, "GET", ""))

    if (str_reply.find(grid_name + ' does not exist') >= 0):
        return 0
    else:
        return 1


# figure out the caching service's IP address
reg = sharedserviceclient.getRegistry("caching", "2.0")
cip = find_cache_IP(reg)

# grid to read data from
grid_name = "WAS_GRID"

# ensure the grid exists
if (does_grid_exist(grid_name, cip)):
    print "Reading data from grid..."
    # read data from it
else:
    print "Grid '" + grid_name + "' doesn't exist"
    # ask somebody to create it

# done with caching server
sharedserviceclient.removeRegistryRef("caching", "2.0")


Application modeling (appmodel/metadata.json)

The appmodel/metadata.json file describes the components, links, and policies that are implemented by the plug-in.

Other files in this directory are referenced from metadata.json, primarily image files for icons to display in Virtual Application Builder. The metadata describes how Virtual Application Builder visually displays components, links, and policies and determines which configuration parameters are exposed to users.


Define components and links

The following examples show how properties are defined for a component and a link.

Figure 1. Component example from the WAS plug-in

[
   {
      "id":"WAS",
      "label":"Web Application (WAS)",
      "description":"Web application on a WAS instance",
      "type":"component",
      "thumbnail":"appmodel/images/thumbnail/WAS.gif",
      "image":"appmodel/images/WAS.gif",
      "attributes":[
         {
            "id":"archive",
            "label":"WAR/EAR File",
            "description":"Name of the WAR file",
            "type":"file",
            "required": true
         } ,
         {
            "id":"WAS_Version",
            "label":"WAS Version",
            "description"
              :"Version of WAS",
            "sampleValue":"7.0",
            "type":"string",
            "options":[
               {
                  "name":"WAS 7.0",
                  "value":"7.0"
               }
            ]
         }
      ]
   }
...
]

Figure 2. Link example from the wasdb2 plug-in

[
   {
      "id":"WASDB2",
      "label":"Data source of DB2",
      "type":"link",
      "source":[
         "WAS"
      ],
      "target":[
         "DB2"
      ],
      "attributes":[
         {
            "id":"jndiDataSource",
            "label":"JNDI Name of Data Source",
            "type":"string",
            "required": true
         },
         {
            "id":"XADataSource",
            "label":"Two-Phase TX Support",
            "description":"DataSource implementation type",
            "type":"boolean",
            "sampleValue":true
         }
      ]
   }
]

Metadata is an array of widget-def objects with the following configuration parameters:

id
Required. Must be unique across all plug-ins within the pattern type.
label
Required. Label for the entry in the Assets pane of the Virtual Application Builder.
description
Optional. Hover help for the entry in the Assets pane of the Virtual Application Builder.
type
Required. The type of item you are defining. Valid values are as follows:

  • component
  • link
  • policy

thumbnail
Optional. URL for 48x48 icon (relative to the plug-in).
image
Optional. URL for 64x64 icon (relative to the plug-in).
attributes
A list of attribute-def objects, described in the next section.
applicableTo
For the policy type only. The IDs of the component types with which the policy can be associated. The Virtual Application Builder does not allow other associations. Use a wildcard, *, to allow the policy to be associated with any component type.
applicationPolicy
For the policy type only. If this attribute is set to true, the policy displays only in the policies that are available at the application level, and the policy cannot be associated with a component. Valid values are as follows:

  • true
  • false (default)

source
For the link type only. ID of the component type from which the link can be created. Use a wildcard, *, to allow a link to be created from any component type, other than the component itself, to this component.

Note: The wildcard can be used only for the source or the target of a component, not both.

target
For the link type only. ID of the component type to which the link can be created. Use a wildcard, *, to allow a link to be created from any component type, other than the component itself, to this component.

Note: The wildcard can be used only for the source or the target of a component, not both.


Attributes of components, links, and policies

The following properties can be defined for attribute-def objects:

builder
Optional. Specifies whether the attribute displays in the property sheet in the Virtual Application Builder. Valid values are as follows:

  • true (default)
  • false

deployable
Optional. Specifies whether the attribute displays on the Virtual Application Deploy settings page. Valid values are as follows:

  • true
  • false (default)

description
Optional. Description of the property the user can configure.
deprecated
Optional. Use to flag a particular attribute as deprecated. The user receives a message that the attribute is deprecated when the application loads. After the application loads, the attribute displays with (Deprecated) next to the label. The user can click remove to remove the attribute from the application. Valid values are as follows:

  • true
  • false (default)

displayType
Optional. Specifies how the properties are displayed in the user interface. If you do not specify a value for displayType, the default for the specified type is used. Valid values are:

  • radio
  • multi-string (for a multi-line text area)
  • password
  • percentage

If the type is set to array, the display of the array depends on whether options is specified.

  • If options is specified, a list box is displayed and users can select from the specified values.
  • If options is not specified, a multi-line text area is displayed and array values that are entered by the user must be separated by a comma (,).

extensions
Optional. Array of strings that indicate valid file extensions. Used only when type is file.
id
Required. Must be unique within the component, link, or policy that you are defining.
invalidMessage
Optional. Message to display if the value specified by the user is not valid.

Note: A default message displays if the invalidMessage is not set, and the user specifies a value that is outside of the min and max limits.

label
Required. Label that is displayed in the property sheet in the Virtual Application Builder.
max
Optional. Maximum value for type range.

Note: If invalidMessage is not set and the user specifies a value above this max limit, a default message displays.

maxOccurs
Optional. Number of links that can be created between a source component and a target component. If maxOccurs is not used, or if the value is set to -1, then an unlimited number of links can be created between the source component and the target component. If a value other than -1 is specified, the number of links between the source component and the target component cannot exceed the specified value.
min
Optional. Minimum value for type range.

Note: If invalidMessage is not set and the user specifies a value below this min limit, a default message displays.

options
Optional. An array of objects for property values that the user can choose in the user interface. As shown in the WAS plug-in example, the keys are name and value.
referenceKey
Optional. A logical name for a reference resource defined in a WAR, EAR, or OSGi file. For more information about resource references, see Resource references.
referenceType
Optional. Used to identify the type of resource reference in a WAR, EAR, or OSGi file. For more information about resource references, see Resource references.
regExp
Optional. Regular expression for input value of type string. Used to test for valid input.

Note: If you are using a special character in the regular expression, such as \d, you must use an escape character: \. For example, "regExp": "\\d{1,3}.\\d{1,3}.\\d{1,3}.\\d{1,3}".

required
Optional. Specifies whether the property is required. Valid values are as follows:

  • true (default)
  • false

type
Required. The type of property. The following types are supported:

array
A list of values, which are separated by commas. The values must be strings. For example, ["item 1", "item 2", "item 3"].
boolean
Valid values are true or false.
string
Text that is enclosed in quotation marks. For example, "item 1".
number
A number. For example, 3.1415926.
file
A file the user uploads. A string that specifies the path and file name. For example, "./archive/test.war".
range
A range of values. Used with min and max properties.
object
An object. Valid value must be a JSON object.
sampleValue
Optional. Default value for the property.
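
As an illustration only (not taken from a shipped plug-in), an attribute-def that combines several of these properties might look like the following:

{
   "id":"connectionTimeout",
   "label":"Connection Timeout",
   "description":"Timeout, in seconds, for new connections",
   "type":"number",
   "required": true,
   "min": 1,
   "max": 600,
   "sampleValue": 180,
   "invalidMessage":"Specify a value between 1 and 600"
}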

The following examples show how different attributes are displayed in the interface.


Attribute examples

type      displayType    options      constraints
array     (default)      —            —                                  A multi-line text box is displayed; user-entered values must be separated by a comma.
array     (default)      specified    —                                  A list box is displayed; the user can select one or more values.
boolean   (default)      —            required
string    (default)      —            regExp, required
string    (default)      specified    required
string    radio          specified    required
string    password       —            regExp, required
string    multi-string   —            required
number    (default)      —            required, min, max
number    percentage     —            required, min, max
file      (default)      —            required, extensions
range     (default)      —            required, min, max, incremental

Attribute groups

All attributes of components, links, and policies can be grouped in appmodel/metadata.json. A group can be enabled or disabled. If one group is disabled, all attributes within it are deleted from the virtual application.

The following example shows an attribute group within the WAS scaling policy:

{
        "id": "ScalingPolicyofWAS",
        "label": "SCALLING_LABEL",
        "type": "policy",
        "applicableTo": [
            "WAR",
            "EAR",
            "OSGiEBA"
        ],
        "thumbnail": "appmodel/images/thumbnail/ClusterPolicy.png",
        "image": "appmodel/images/ClusterPolicy.png",
        "helpFile": "",
        "description": "SCALLING_DESCRIPTION",
        "groups": [
            {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "None",
                "label": "SCALE_POLICY_TYPE_NONE" ,
                "defaultValue" : true,
                "attributes": [
                   "intialInstanceNumber"
                ]
            } ,
            {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "Basic",
                "label": "SCALE_POLICY_TYPE_BASIC",
                "defaultValue" : false,
                "attributes": [
                    "CPU.Used.Basic.Primitive" ,
                    "scaleInstanceRange1" ,
                    "triggerTime1"
                ]
            } ,
            {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "WebIntensive",
                "label": "SCALE_POLICY_TYPE_WEBINTENSIVE",
                "defaultValue" : false,
                "attributes": [
                    "WAS_WebApplications.MaxServiceTime.WebIntensive.Primitive",
                    "scaleInstanceRange2" ,
                    "triggerTime2"
                ]
            } ,
            {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "WebToDB",
                "label": "SCALE_POLICY_TYPE_WEBTODB",
                "defaultValue" : false,
                "attributes": [
                    "WAS_WebApplications.MaxServiceTime.WebToDB.Primitive" ,
                    "WAS_JDBCConnectionPools.WaitTime.WebToDB.Primitive",
                    "WAS_JDBCConnectionPools.PercentUsed.WebToDB.Primitive" ,
                    "scaleInstanceRange3" ,
                    "triggerTime3"
                ]
            }
        ],
        "attributes": [
            {
                "id": "CPU.Used.Basic.Primitive",
                "label": "SCALE_ATTRIBUTES_CPU",
                "type": "range",
                "displayType": "percentage",
                "required": false,
                "max": 100,
                "min": 1,
                "sampleValue": [
                    20,
                    80
                ],
                "description":"SCALE_ATTRIBUTES_CPU_DESCRIPTION"
            },
           ......
        ]
    },
Properties of the group object are as follows:

id
Required. Must be unique within the component, link, or policy that you are defining.
label
Required. Label that is displayed in the pane in the Virtual Application Builder.
attributes
Required. List of attribute-def object IDs within this component, link, or policy.
category
Optional. A category that is associated with the group. Groups with the same category name are mutually exclusive.
required
Optional. Specifies whether a group is required. A group with required = true cannot be disabled. Valid values are:

  • true
  • false


Handling an uploaded file

For the file type, your plug-in must include handling of the file that a user uploads.

When a file is uploaded by a user, it is stored in the Storehouse. When the application model is transformed to a topology document, the vm-template definition must include a parameter for the file. This parameter enables the lifecycle script on the deployed virtual machine to retrieve the file and, if necessary, extract its contents.

For example, this metadata.json example defines an archive file.

"attributes"  : [
  {
  "id"          : "archive",
  "label"       : "ARCHIVE_LABEL",
  "description" : "ARCHIVE_DESCRIPTION",
  "type"        : "file",
  "required"    : true,
  "extensions"  : [
  "zip","tgz","tar.gz" 
  ] 
},

The vm-template then includes an ARCHIVE parameter that references the attribute with the ID archive.

 "roles"    : [
  {
  ................
  "name"         : "",
  "type"         : "",
  "parms":{
  "ARCHIVE"   : "$provider.generateArtifactPath( $applicationUrl, ${attributes.archive} )",
  ................

A lifecycle script on the deployed virtual machine can then reference the ARCHIVE parameter to retrieve the file. In this example, the maestro.downloadx utility downloads and extracts the file. You can also use maestro.download to download the file without extracting contents.

ARCHIVE = maestro.parms['ARCHIVE']
# Download and unzip the archive file
logger.debug("Starting to download the archive file...")
maestro.downloadx(ARCHIVE, deployDir)


Define interim fixes

Users can upload an IBM interim fix and add it to the catalog as an emergency fix. An interim fix that is available in the catalog can be added to a virtual application pattern by using the Virtual Application Builder so that it is installed during deployment. Currently, only WAS interim fixes for deployments based on IBM Web Application Pattern are supported at the component level.

Use an application level iFix policy to install updates to the pattern type and plug-ins during deployment.

When an interim fix is added to a virtual application pattern, the system adds the URLs of selected interim fixes to the application model document appmodel.json as attributes:

"attributes": {
    "archive": "artifacts/simple2.war", 
    "ifixes": [
      "https://localhost:9443/services/ifixes/1/scriptArchive.zip",
      "https://localhost:9443/services/ifixes/2/scriptArchive.zip"
    ], 
    "ignoreFailedIfix": true,
        ...
}

During deployment, the application model is transformed to a topology document (topology.json) and the vm-template includes an IFIXES parameter that references the interim fix URLs. For example, the topology document includes:

"roles": [
    {
      ...
      "parms": {
         "ARCHIVE": "$$1",
         "RESTART_INTERVAL": 24,
         "KEYSTORES_PASSWORD": "<xor>BSdtBiUPOwwGbCVr",
         "IFIXES": [
             "https://localhost:9443/services/ifixes/1/scriptArchive.zip",
             "https://localhost:9443/services/ifixes/2/scriptArchive.zip"
         ],
         "USERID": "virtuser",
         "PASSWORD": "$$2",
         "ignoreFailedIfix": "true",
         "hugePageSize": 2048
      },
      ...
    }
]

The lifecycle script on the deployed virtual machine uses the information in the topology document to download the interim fixes onto the deployed virtual machine and then install applicable interim fixes.
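
As a minimal sketch, a lifecycle script might consume the IFIXES parameter as follows; the destination directory is a hypothetical choice, and maestro.download is used here as in the earlier archive example:

import maestro

# Download each interim fix listed in the IFIXES role parameter.
ifixes = maestro.parms['IFIXES']
for url in ifixes:
    # /tmp/ifixes is a hypothetical destination directory.
    maestro.download(url, '/tmp/ifixes')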


Resource references

A resource reference is a logical name used to locate an external resource for an application. At deployment time, the references are bound to the physical location (global JNDI name) of the resource in the target environment. In the Virtual Application Builder, the system scans a virtual application pattern for all references in uploaded EAR, WAR, or OSGi files. The system then maps the resource references it finds during the scanning process to resource references defined in your plug-ins.

When you include referenceType and referenceKey in a property definition in metadata.json, you can obtain the values from the resource reference definition in the deployment descriptor of the EAR, WAR, or OSGi file.

For example, if the deployment descriptor in a data source includes this resource reference definition:

<resource-ref>
  <res-ref-name>jdbc/TradeDataSource</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
</resource-ref>

The metadata of the component WASDB2 has a property that is called resourceRefs, which maps to this definition:

{
  "id"           : "resourceRefs",
  "label"        : "DS_RESOURCE_REFERENCE_LABEL",
  "type"         : "array",
  "required"     :  false,
  "description"  : "DS_RESOURCE_REFERENCE_DESC",
  "referenceType": "javax.sql.DataSource",
  "referenceKey" : "res-ref-name"
}
The value of referenceType matches the value of the res-type element in the deployment descriptor and the value of referenceKey matches the res-ref-name element name in the deployment descriptor. As a result, the value jdbc/TradeDataSource in the deployment descriptor is considered a candidate value for the property resourceRefs.


Policies

A policy is a JSONObject defined in metadata.json. Policies apply to components.

There are two approaches to defining a policy.

Extended (type 1)

This implementation is the default. For an extended policy, the component, policy, and component transform are tightly coupled and are typically implemented and delivered in the same plug-in. TopologyProvider has the getPolicy(JSONObject component, String policyName) method for the component transform to access policies and their attributes. An extended policy is just an extra set of attributes that are added to a component. These attributes are processed by the transform for the component.

Linked policy (type 2)

In a linked policy approach, a policy defined in one plug-in is used to extend the capability of an existing component defined in another plug-in.

In both cases, the definition of the policy is the same in metadata.json. The sample policy that is provided in the Plug-in Development Kit has the following policy:

Figure 3. Policy example

"id" : "HLPolicy",
"type" : "policy",
"transformAs" : "linked",
"applicableTo" : [ "Hello" 
"attributes" : [
   {
   "id" : "lp1",
   "label" : "linked policy value1",
   "type" : "string",
   "required" : true,
   "sampleValue" : "sample linked policy value"
   }
] 

The transformAs setting is optional and can have a value of extended or linked. If transformAs is not specified, the policy is handled as an extended policy by default.

In Virtual Application Builder, the user experience is the same for both types of policies. However, the difference between the two policy types is in the implementation. From an implementation perspective, a linked policy becomes a component with attributes. A link is created from the policy component to the component that the policy modifies.

The selection of source and target components enables linked policies to be implemented easily with Velocity template transforms, and this capability distinguishes it from the extended policy implementation.

When the application model is processed:

  1. All components are transformed before all links.
  2. If the source component of link 1 is the target of link 2, link 1 is transformed before link 2.


Linked policy implementation

The sample linked policy HLPolicy becomes a component (HLPolicy) with a link from the Hello component to the HLPolicy component. During deployment, the Hello component is transformed first, and all of its default policies are processed by the Hello component transformer. The Hello component depends on the HelloCenter component. The HelloCenter component is also transformed, either before or after the Hello component. The HLPolicy component is also transformed, and the order between these three components is indeterminate. Next, the hclink link from Hello to HelloCenter is processed. The link from Hello to HLPolicy is processed, but the order of processing of these links is indeterminate, since the Hello component is the source of each link.

The Hello component is deployed onto a virtual machine with a Hello role and the corresponding role lifecycle scripts. Those artifacts define the basic behavior. The sample policy modifies that basic behavior. The HLPolicy plug-in provides more lifecycle scripts. By modeling the policy as an extension dependency in the role, more behavior is added.

Without the policy, the workload agent runs only the base role scripts. When the policy is added, the policy's additional scripts are included in the sequence.

Note: All scripts are optional, but this example shows what is possible.

To create this implementation, extra artifacts are required in the topology document. The policy transformer (HLPolicyTransform.java or the hlpolicy{component,link}.vm Velocity templates) adds these artifacts.

Note: There are two types of depends objects: role and extension. Both are similar in that they can inject scripts into the base role. The difference is that a role dependency represents a dependency on another role, so changes in that other role trigger the {role}/{dependency}/changed.py script to run. Extension dependencies do not have a changed.py script. Within a topology document, the difference is whether the depends object has a "role" attribute. The following excerpt from topology.json is generated from the sample pattern and shows both types of dependencies. The first depends object is a role dependency, the second is an extension dependency:

{
   "roles":[
      {
         "type":"Hello",
         "name":"hello",
         "parms":{
            "Hello_Sender":"Ann"
         },
         "depends":[
            {
               "role":"Hello_Center_Plugin-hcenter.HelloCenter",
               "parms":{
                  "HC_Receiver":"Bob",
                  "inst_id":1
               },
               "type":"HCenter"
            },
            {
               "parms":{
                  "lp1":"my linked policy value"
               },
               "type":"HLPOLICY"
            } 


Globalization

You can translate attributes for a plug-in by using messages.json files.

Create a locale folder with subfolders, named with two-letter ISO 639-1 language codes, that contain the messages.json file for each language that your plug-in supports.

For example, if metadata.json contains:

[
    {
        "id": "Hello",
        "label": "Hello_Label",
        "description": "Hello_Desc",
        "thumbnail": "appmodel/images/thumbnail/Hello.png",
        "image": "appmodel/images/Hello.png",
        "type": "component",
        "attributes": [
            {
                "id": "sender",
                "label": "Sender_Label",
                "description": "Sender_Desc",
                "required": true,
                "type": "string",
                "regExp": "\\w*",
                "invalidMessage": "Invalid_Sender_Msg" 
            }          
        ]
    }
]
and operation.json contains:

{
    "Hello": [
        {
            "id": "HelloMessage",
            "label": "Hello_Label",
            "description": "Inlet_Desc",
            "target": "Any",
            "aggregator" : "none",
            "script": "send.py HelloMessage",
            "attributes": [
                {
                    "label": "Username_Label",
                    "type": "string",
                    "id": "UserName",
          "displayType": "string",                    
                    "description": "Inlet_Desc" 
                }
            ] 
        }
      ]
}  
then the messages.json file in the plugin.com.ibm.sample.hello\plugin\appmodel\locales\en\ directory contains:

{
  "Hello_Label": "Hello Plugin",
  "Username_Label": "Username",
  "Inlet_Desc":"Deployment inlet to check for registered users",
  "Sender_Label":"Sender Name",
     "Sender_Desc":"Greeting message sender"
}

The attributes Hello_Label, Username_Label, Inlet_Desc, Sender_Label, and Sender_Desc are translated for the user when they access the plug-in through Virtual Application Builder.

Note: You cannot update messages.json through the Virtual Application Builder user interface. To update messages.json, you must:

  1. Export the virtual application.
  2. Modify the messages.json files on the local file system.
  3. Import the application.


Deployment

This topic describes various aspects of deployment and how to design for them in your plug-in.


Activation

Each image contains a startup script, /0config/0config.sh, that executes after the virtual machine starts.

Each virtual machine is assigned a unique name within the deployment. The name is set as an environment variable named SERVER_NAME. The value is formed by appending an instance number to the corresponding vm-template name, for example, application-was.11373380043317, application-was.2237183401478347. Activation proceeds as follows:

  1. Get the vm-template for this virtual machine from the topology document, for example, if SERVER_NAME == application-was.1, then get the vm-template named application-was.
  2. For each node part in the vm-template:

    1. Download the node part .tgz file and extract into the {nodepkgs_root} directory.
    2. Invoke {nodepkgs_root}/setup/setup.py, if the script exists. Associated parms from the topology document are available as maestro.parms.
    3. Delete {nodepkgs_root}/setup/.

  3. Run the installation scripts ({nodepkgs_root}/common/install/*.sh|.py) in ascending numerical order.
  4. Run the start scripts ({nodepkgs_root}/common/start/*.sh|.py) in ascending numerical order.

In Step 2, node parts must not rely on the order of installation; that is, the setup/setup.py script must rely on contents of that node part only. One exception is the maestro module. The module initialization script is in place, so that the setup.py script can use the maestro HTTP client utility methods, for example, maestro.download(), and maestro.parms, to obtain configuration parameters.
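
For example, a node part setup.py script operating under these constraints might look like this sketch; the parameter names installDir and installerURL are hypothetical:

# setup.py (sketch): runs once while this node part is set up.
import maestro

# Parms for this node part come from the topology document.
install_dir = maestro.parms['installDir']
installer_url = maestro.parms['installerURL']

# Use only this node part's contents plus the maestro utilities.
maestro.download(installer_url, install_dir)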

Both installation and start scripts are ordered. By convention, these scripts are named with a number prefix, such as 5_autoscaling.sh or 9_agent.sh. These scripts are said to be in slot 5 or slot 9. All installation scripts in slot 0 are run before any installation script in slot 1. All of the installation scripts are run in sequential order, and then all of the start scripts are run in sequential order. Setup and installation scripts are run one time for each virtual machine; start scripts are run on every start or restart. For more information, see the section Recovery: Reboot or replace? The workload agent is packaged and installed as a node part.


Node parts

Node parts are installed by the activation script and generally contain binary and script files to augment the operating system. Review the following information about node parts:


Parts

Parts are installed by the workload agent and generally contain binary and lifecycle scripts that are associated with roles and dependencies. Review the following information about parts:


Roles

A role represents a managed entity within a virtual application instance. Each role is described in a topology document by a JSON object, which is contained within a corresponding vm-template. Role-specific values are available to lifecycle scripts through the maestro module; for example, maestro.role['tmpdir'] is the role-specific working directory, which is not cleared (string; read-only).

You can import custom scripts, for example, import my_role/my_lib.py:

import sys

utilpath = maestro.node['scriptdir'] + '/my_role'
if not utilpath in sys.path:
    sys.path.append(utilpath)
import my_lib

The following is an example role from a topology document:

"roles":[
{
"plugin":"was\/2.0.0.0",
"parms":{
"ARCHIVE":"$$1",
"USERID":"virtuser",
"PASSWORD":"$$6"
},
"depends":[
{
"role":"database-db2.DB2",
"parms":{
"db_provider":"DB2 Universal JDBC Driver Provider",
"jndiName":"TradeDataSource",
"inst_id":1,
"POOLTIMEOUT":"$$11",
"NONTRAN":false,
"db2jarInstallDir":"\/opt\/db2jar",
"db_type":"DB2",
"db_dsname":"db2ds1",
"resourceRefs":[
{
"moduleName":"tradelite.war",
"resRefName":"jdbc\/TradeDataSource"
}
],
"db_alias":"db21"
},
"type":"DB2",
"bindingType":"javax.sql.DataSource"
}
],

Role names and role types

Role names must be unique within the vm-template. In the topology, it identifies a specific section in the vm-template where role parameters are defined. Role names do not need to match the role type. The role name identifies the directory where the role scripts are stored. For example, if your vm-template includes these lines:

{
  "name": "A_name",
  "type": "A",
  "parms": {}
}
The following actions occur when the virtual machine is deployed:

The following example shows a deployment document that is generated for a caching shared service deployment. The fully qualified role name for each node is in the format

{server name}.{role name}
The {server name} is based on the vm-template name defined in the topology document. The {role name} is the role name defined in the topology document with a unique instance number appended to the name.

 ROLES: [
    {
      time_stamp: 1319543308833,
      state: "RUNNING",
      private_ip: "172.16.68.128",
      role_type: "CachingContainer",
      role_name: "Caching-Container.11319542242188.Caching",
      display_metrics: true,
      server_name: "Caching-Container.11319542242188",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    },
    {
      time_stamp: 1319543269980,
      state: "RUNNING",
      private_ip: "172.16.68.129",
      role_type: "CachingCatalog",
      role_name: "Caching-Catalog.21319542242178.Caching",
      display_metrics: false,
      server_name: "Caching-Catalog.21319542242178",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    },
    {
      time_stamp: 1319544107162,
      state: "RUNNING",
      private_ip: "172.16.68.131",
      role_type: "CachingMaster",
      role_name: "Caching-Master.11319542242139.Caching",
      display_metrics: true,
      server_name: "Caching-Master.11319542242139",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    },
    {
      time_stamp: 1319543249613,
      state: "RUNNING",
      private_ip: "172.16.68.130",
      role_type: "CachingCatalog",
      role_name: "Caching-Catalog.11319542242149.Caching",
      display_metrics: false,
      server_name: "Caching-Catalog.11319542242149",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    }
  ],

Role state and status

The agent implements a state machine that drives each role through a basic progression as follows:

  1. INITIAL: Roles start in the initial state. The {role}/install.py script for each role starts. For each dependency of the role, {role}/{dependency}/install.py starts, if it exists. If the scripts complete successfully, the role progresses automatically to the INSTALLED state. If the install.py script fails, the role moves to the ERROR state, and the deployment fails.
  2. INSTALLED: From this state, the {role}/configure.py script runs, if one exists. For each dependency of the role, {role}/{dependency}/configure.py starts, if it exists. If the scripts complete successfully, the role progresses automatically to the CONFIGURING state. If the configure.py script fails, the role moves to the ERROR state, and the deployment fails.
  3. CONFIGURING: From this state, the start.py script runs, if one exists. The role reacts to changes in the dependent roles ({role}/{dependency}/changed.py) and peers ({role}/changed.py), if they exist.

    Note: For more information about {role}/changed.py and {role}/{dependency}/changed.py, see the pydoc.

  4. STARTING: The automatic state setting stops. A lifecycle script must explicitly set the role state to RUNNING. Role status is set as follows:

    import maestro
    maestro.role_status = 'RUNNING'
    
  5. RUNNING

If the process is stopped or an unrecoverable error occurs, the role moves to an ERROR state. If an error is recoverable, you can keep the role state as RUNNING and change the role status to FAILED. For example, if WAS crashes, and the crash is detected by wasStatus.py, the wasStatus.py script sets maestro.role_status = "FAILED". When a user starts WAS from the Virtual Application Console, one of the following processes occurs:

If the deployment is stopped or destroyed, the stop.py script runs, and the role moves to the TERMINATED state. Roles are only moved to the TERMINATED state by external commands.
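
The recoverable-failure pattern described above (keep the role state RUNNING, set the role status to FAILED) might be sketched as follows; is_server_running() is a hypothetical check:

# Sketch of a status-monitor script such as wasStatus.py.
import maestro

def is_server_running():
    # Hypothetical check; a real script would probe the server process.
    return False

if is_server_running():
    maestro.role_status = 'RUNNING'
else:
    # Recoverable failure: leave the role state as RUNNING,
    # but report the FAILED status to the console.
    maestro.role_status = 'FAILED'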

The role status can change during transitions and within a state. The following table shows the state progression that is described previously, with details of the status values and the lifecycle scripts that are started:


Role state and status

Role state    Transition                  Status during transition           Scripts invoked
INITIAL       INITIAL => INSTALLED        INSTALLING                         {role}/install.py, then all {role}/{dep}/install.py
INSTALLED     INSTALLED => CONFIGURING    CONFIGURING                        {role}/configure.py, then all {role}/{dep}/configure.py
CONFIGURING   CONFIGURING => STARTING     STARTING (role status by script)   {role}/start.py
RUNNING       on entry; on changed        role_status (set by script)        {role}/start.py, changed.py scripts

Existing resources

Plug-ins can interact with existing resources. Although the existing resource is not a managed entity within the plug-in, it is modeled as a role, which allows for a consistent approach, whether dealing with pattern-deployed or existing resources. Specifically:

An existing resource is modeled by a component in appmodel/metadata.json file. Typical component attributes are required to connect to the resource, such as hostname/IP address, port, and application credentials.

Integration with existing resources is modeled by a link in the appmodel/metadata.json file.

If a type of resource can be either pattern-deployed or existing, consolidation is possible by adding a role to represent the external resource. This role can export the same parameters from the existing resource that the dependent role for the pattern-deployed case can handle.

Consider the case of an application that uses an existing resource, as in the wasdb2, imsdb, and wasctg plug-ins. At the application model level, the existing database is a component, and the use of that database by WAS, on behalf of the application, is represented as a link to that component. Typical attributes of the existing database are its host name or IP address and port, and the user ID and password for access.

In the older approach, the existing database component has a transform that builds a JSON target fragment that stores the attributes, and the link transform uses these attributes. In IMS, for example, the link transform creates a dependency in the WAS role in the WAS node, with the parameters of the existing database passed from the component. The dependent role configure.py script configures WAS to use the existing database based on those parameters. This approach works, but in the deployment panel the parameters of the existing database appear under the WAS role, which is not intuitive.

In the new role approach, the target component creates a role JSON object and the link transform adds it to the list of roles in the WAS virtual machine template. The wasdb2 plug-in creates an xDB role to connect to existing DB2 and Informix databases. IMS can convert to this model and move its configure.py and changed.py scripts to a new xIMS role. The advantage is in the deployment panel, which lists each role for a node separately in a left column, where its parameters and operations are better separated for user access.

The wasdb2 plug-in provides an extra feature that IMS and CTG might not use. The plug-in also supports pattern-deployed DB2 instances. In the pattern-deployed scenario, the DB2 target node is a node that is started. The correct model is a dependent role and the link configuration occurs when both components, source WAS and target DB2, start. The changed.py script is then run. For the existing database scenario, the wasdb2 plug-in exports the same parameters as the DB2 plug-in, and then processing for pattern-deployed and existing cases can be performed in the changed.py script. IMS and wasctg do not require this process and can use a configure.py role script for new roles.


Repeatable tasks

At run time, a role might need to perform some actions repeatedly. For example, the logging service must back up local logs to the remote server at a fixed interval. To meet this requirement, the plug-in framework allows a script to be started after a specified time.

You can run a task from any lifecycle script that belongs to a role, such as install.py, configure.py, start.py, and changed.py. Call the maestro.tasks.append() method to run the task. For example:

task = {}
task['script'] = 'backupLog.py'
task['interval'] = 10

taskParms = {}
taskParms['hostname'] = hostname
taskParms['directory'] = directory
taskParms['user'] = user
taskParms['keyFile'] = keyFile
task['parms'] = taskParms

maestro.tasks.append(task)

When you are troubleshooting a task that does not run, check the script that calls the task with maestro.tasks.append() first.

You must have a dictionary object; in this sample, it is named task, but you can use any valid name. The target script is specified by task['script'], and the interval, in seconds, is specified by task['interval']. Optionally, you can pass parameters to the script by using task['parms']. Calling maestro.tasks.append(task) enables the task. In this sample, backupLog.py, which is in the folder {role}/scripts, is started 10 seconds after the current script completes. Within the backupLog.py script, you can retrieve the task parameters from maestro.task['parms'] and the interval from maestro.task['interval']. The script is started only one time. If the backupLog.py script must run repeatedly, add the same registration code to the backupLog.py script itself; each time the script completes, it is then scheduled again with the newly specified interval and parameters.
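
Based on that description, a backupLog.py script that reschedules itself might look like this sketch; the backup step itself is omitted:

# backupLog.py (sketch): runs once per scheduled start, then reschedules itself.
import maestro

parms = maestro.task['parms']
interval = maestro.task['interval']

# ... back up logs to parms['hostname'] under parms['directory'] ...

# Register the task again so that it runs after another interval.
task = {}
task['script'] = 'backupLog.py'
task['interval'] = interval
task['parms'] = parms
maestro.tasks.append(task)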


Recovery: Reboot or replace?

If a virtual machine stops unexpectedly, the master agent recovers the failed virtual machine. The action depends on the virtual machine type. A persistent virtual machine is rebooted. Other virtual machines are replaced.

A virtual machine is persistent if it is an instance of a vm-template with a true-valued persistent property, as follows:

"vm-templates": [
        {
          "persistent":true,
            "scaling": {
                "min": 1, 
                "max": 1
            },

There are two ways for a plug-in to mark a virtual machine as persistent: directly, by setting the persistent property on the vm-template, or indirectly, by declaring persistent in a package that is associated with the vm-template.

The direct method supersedes the indirect. That is, if the vm-template is marked persistent (true or false), that is the final value. If the vm-template is not marked persistent, the resolve phase of deployment derives a persistent value for the vm-template based on the packages that are associated with that vm-template. The vm-template is marked persistent (true) if any package declares persistent true.

The indirect method provides more flexibility to integrate parts and node parts, without requiring global knowledge of where persistence is required. A transformer adds the property as follows:

 "vm-templates": [
        {
          "persistent":true,
            "scaling": {
                "min": 1, 
                "max": 1
            }, 
            "name": "Caching_Master", 
            "roles": [
                {
                    "depends": [{
                        "role": "Caching_Slave.Caching"
                    }],
                    "type": "CachingMaster", 
                    "name": "Caching",
                    "parms":{
                   "PASSWORD": "$XSAPassword"
                  }
                }
            ], 
            "packages": [
                "CACHING"
            ]
        },
Package configuration is specified in the config.json file as follows:

{
    "name" : "db2",
    "version" : "1.0.0.0",
    "packages" : {
        "DB2" : [{
            "requires" : {
                "arch" : "x86_64",
                "memory" : 0
            },
            "persistent" : true,
            "parts" : [
                {"part" : "parts/db2-9.7.0.3.tgz",
                 "parms" : {
                    "installDir" : "/opt/ibm/db2/V9.7"}},
                {"part" : "parts/db2.scripts.tgz"}]
        }]
    }
}


Virtual Application Console

You can manage virtual application instances from the Operation tab of the Virtual Application Console. The tab displays the roles in a selected virtual application instance, and each role provides the actions that the underlying plug-ins define for it, such as retrieving data, applying a software update, or modifying a configuration setting.

There are two types of actions you can perform to manage or modify deployed roles. The main difference between these actions is the scope.

Operation

For an operation, the action affects only current deployed roles. An operation is defined in operation.json.

Configuration

For a configuration, attribute changes are saved in the topology. If a virtual machine is restarted or more virtual machines are deployed, the configuration attribute value is applied to these virtual machines. A configuration is defined in tweak.json.

In the deployment panel, a configuration update is handled as a special type of operation.


Define plug-in variables

You can define name and value pairs in the config.json file so that they can be reused as variables in several areas of the plug-in.

Name and value pairs are defined within a parms object.


Example

The wasoracle plug-in provides an example of parms reuse. A parms object in config.json defines two variables for the JDBC driver.

"WASOracle" : [
   {
      "parts" : [
         {
            "part"  : "parts/wasxoracle.scripts.tgz",
            "parms" : {
                 "jdbcDriver"     : "$JDBC_DRIVER",
                 "jdbcInstallDir" : "$JDBC_INSTALL_DIR"
            }
         }
      ]
   }
Later in the same file, the following parms object defines values for JDBC_DRIVER and JDBC_INSTALL_DIR:

   "parms" : {
      "JDBC_DRIVER"      : null,
      "JDBC_INSTALL_DIR" : "/opt/oraclejdbcjar"
Because the JDBC_DRIVER value is null, the user must specify a JDBC file name before the plug-in can be enabled, and a warning icon is displayed next to the plug-in on the System Plug-ins page to indicate that the configuration of the plug-in is incomplete. The config_meta.json file for the wasoracle plug-in includes "id" : "parms.JDBC_DRIVER" to reference the JDBC_DRIVER name and value pair defined in config.json.

[
  {
   "id"          : "parms.JDBC_DRIVER",
   "path"        : "files",
   "label"       : "ORA_JDBC_DRIVER",
   "description" : "ORA_JDBC_DRIVER_DESC",
   "type"        : "file",
   "extensions"  : [ "jar" ]
   }
]


Develop lifecycle scripts

There are some guidelines for developing lifecycle scripts to consider as you develop your plug-ins.


Run lifecycle scripts on virtual machines

When a virtual application is deployed, the lifecycle scripts are copied to the deployed virtual machines. To debug and adjust the scripts, you must connect to the virtual machines by using SSH.

SSH must be configured on the virtual machine that you want to work with.

For the purposes of troubleshooting plug-ins, you should also consider installing the debug and unlock plug-ins to help you debug more effectively.

  1. Determine the deployed virtual machine that you want to run scripts on, and get its IP address. The example in these instructions uses 172.16.37.128.

  2. Log on to the virtual machine by using SSH. The following example uses the OpenSSH command with the IP address 172.16.37.128.

    ssh -i id_rsa virtuser@172.16.37.128
    
    For detailed instructions for making an SSH connection from the command line or by using PuTTY on Windows, see the instructions for connecting to a virtual application instance.

  3. Become the root user.

    sudo su -
    

  4. Set up the shell variables:

     .  /0config/nodepkgs/common/scripts/pdk-debug/setEnv.sh
    

    The environment variables are logged in /0config/0config.log at the top of the file. The environment variable $NODEDIR is the node working directory. The directory is /opt/IBM/maestro/agent/usr/servers/{vm-template-name}.{timestamp}. For example, for an IBM WAS node:

    /opt/IBM/maestro/agent/usr/servers/Web_Application-was.11312470007562
    

    The "vm-templates" element of the topology document contains "name":"Web_Application-was".

  5. Find the script to run by using reqDirs.sh and the request directory.

    reqDirs.sh
    

    The output is like this example:

    $NODEDIR = /opt/IBM/maestro/agent/usr/servers/Web_Application-wasce.11313595092512
    $NODEDIR/python/log_injector.py RequestDir: $NODEDIR/pyworkarea/requests/1444249909829418986
    $NODEDIR/python/log_injector.py RequestDir: $NODEDIR/partsInstall
    .
    $NODEDIR/scripts/WASCE/install.py RequestDir: $NODEDIR/pyworkarea/requests/2799038048538593654
    $NODEDIR/scripts/WASCE/configure.py RequestDir: $NODEDIR/pyworkarea/requests/9078005070867166367
    $NODEDIR/scripts/AGENT/start.py RequestDir: $NODEDIR/pyworkarea/requests/637746887665724204
    $NODEDIR/scripts/SSH/start.py RequestDir: $NODEDIR/pyworkarea/requests/7163718071124984320
    $NODEDIR/scripts/WASCE/start.py RequestDir: $NODEDIR/pyworkarea/requests/7130019476062423261
    
    Scripts are in $NODEDIR/scripts/{role}

    For example, to rerun the WASCE install.py script, find WASCE/install.py in the left column of the reqDirs.sh script output, and its request directory in the right column.

  6. Change the current directory to the request directory. For example:

    cd $NODEDIR/pyworkarea/requests/2799038048538593654
    

  7. Print formatted .json files by using dumpJson.sh.

    dumpJson.sh out.json
    

    The out.json file is the output from the last time the script ran. The in.json file is the input to the script, containing input parameters from the topology document.

    Both in.json and out.json are formatted, so you can use the commands cat, edit, less, or view to display them.

  8. Run the script by using runScript.sh. For example:

    runScript.sh $NODEDIR/scripts/WASCE/install.py
    


Deployed node startup flow

  1. Run 0config.sh in /0config
  2. Download activator .zip files from BOOTSTRAP_URL and extract them.
  3. Change to the /0config/start directory and run .sh scripts in numerical order. Scripts start with a number. These instructions use the example 5_exec_vm_tmpl.sh.
  4. The script 5_exec_vm_tmpl.sh calls /0config/exec_vm_tmpl/exec_vm_tmpl.py.
  5. The exec_vm_tmpl.py script reads the topology.json file, and for each node part does the following tasks:

    1. Downloads the node part.
    2. Runs the setup.py script for the node part, if it exists. Any parameters for the node part from the topology document are set in the script's environment and are available from the maestro package.

  6. The 5_exec_vm_tmpl.sh script then calls node part installation scripts (.py or .sh) in numerical order from /0config/nodepkgs/common/install.

    To rerun this script:

    For a .sh script

    1. Set up the environment:

      . /0config/nodepkgs/common/scripts/pdk-debug/setEnv.sh
      cd /0config/nodepkgs/common/install
      
    2. Start the .sh script directly from the command line.

    For a .py script

    Run the script with:

    cd /0config/nodepkgs/common/install
    runScript.sh {script-name}
    
  7. The 5_exec_vm_tmpl.sh script then calls node part start scripts (.py or .sh) in sequential order from /0config/nodepkgs/common/start.

    To rerun this script:

    For a .sh script

    1. Set up the environment:

      . /0config/nodepkgs/common/scripts/pdk-debug/setEnv.sh
      cd /0config/nodepkgs/common/start
      
    2. Start the .sh script directly from the command line.

    For a .py script

    Run the script with:

    cd /0config/nodepkgs/common/start
    runScript.sh {script-name}
    
  8. The /0config/nodepkgs/common/start/9_agent.sh script starts last. This script starts the maestro agent code which downloads and installs parts, and runs the part lifecycle scripts.
  9. For each part, the following steps occur:

    1. Download the part .tgz file and extract into {tmpdir}.
    2. Run {tmpdir}/install.py, passing any associated parameters that are specified in the topology document.
    3. Delete {tmpdir} if the script is successful. The directory is not deleted if the script fails or if the virtual application is deployed with a debug component with Deployment for manual debugging configured.

  10. Each role in the vm-template runs concurrently. For each role:

    1. Run {role}/install.py, if it exists.
    2. For each dependency of the role, run {role}/{dependency}/install.py, if it exists.
    3. Run {role}/configure.py, if it exists.
    4. For each dependency of the role, run {role}/{dependency}/configure.py, if it exists.
    5. Run {role}/start.py, if it exists.

  11. React to changes in dependencies with {role}/{dependency}/changed.py and peers with {role}/changed.py, if they exist.


Application model and topology document examples

The application model and topology documents are core pieces of the PureApplication System modeling and deployment. This section presents examples of these related documents as a basis for the other sections in this guide. The sample Java Enterprise Edition (Java EE) web application that is provided with the web application virtual application pattern type is used as an example.


Application model (appmodel.json)

The appmodel.json file represents the serialization of the model defined in the Virtual Application Builder user interface. Components (nodes) and links, along with user-specified property values, are represented.

{
   "model":{
      "name":"Sample",
      "nodes":[
         {
            "attributes":{
               "WAS_Version":"7.0",
               "archive":"artifacts/tradelite.ear",
               "clientInactivityTimeout":60,
               "asyncResponseTimeout":120,
               "propogatedOrBMTTranLifetimeTimeout":300,
               "totalTranLifetimeTimeout":120
            },
            "id":"application",
            "type":"EAR"
         },
         {
            "attributes":{
               "dbSQLFile":"artifacts/setup_db.sql"
            },
            "id":"database",
            "type":"DB2"
         }
      ],
      "links":[
         {
            "source":"application",
            "target":"database",
            "annotation":"",
            "attributes":{
               "connectionTimeout":180,
               "nontransactional":false,
               "minConnectionPool":1,
               "jndiDataSource":"jdbc/TradeDataSource",
               "XADataSource":false,
               "maxConnectionPool":50
            },
            "type":"WASDB2",
            "id":"WASDB2_1"
         }
      ]
   }
}


Topology document

The final topology document for an application model depends on the deployment environment, such as storehouse URL and image ID. This sample shows two vm-templates from the web application, application-was and database-db2. Each vm-template has a list of node parts and parts to be installed, and run time roles to be managed.

{
   "vm-templates":[
      {
         "parts":[
            {
               "part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/was\/parts\/was-7.0.0.11.tgz",
               "parms":{
                  "installDir":"\/opt"
               }
            },
            {
               "part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/was\/parts\/was.scripts.tgz"
            },
            {
               "part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/wasdb2\/parts\/db2.jdbc.tgz",
               "parms":{
                  "installDir":"\/opt\/db2jar"
               }
            },
            {
               "part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/wasdb2\/parts\/wasdb2.scripts.tgz"
            }
         ],
         "node-parts":[
            {
               "parms":{
                  "private":"127.0.0.1"
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/firewall\/nodeparts\/firewall.tgz"
            },
            {
               "parms":{
                  "iaas-port":"8080",
                  "agent-dir":"\/opt\/IBM\/maestro\/agent",
                  "http-port":9999,
                  "iaas-ip":"127.0.0.1"
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/agent\/nodeparts\/agent-linux-x64.tgz"
            },
            {
               "parms":{
                 "installerURL":"files\/itmosv6.2.2fp2_linuxx64.tar.gz",
                 "omnibustarget":"",
                 "temsip":"",
                 "omnibusip":""
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/monitoring\/nodeparts\/monitoring.tgz"
            },
            {
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/deployinlet\/nodeparts\/deployinlet.tgz"
            },
            {
               "parms":{
                  "collectors":[
                     {
                        "url":"http:\/\/COLLECTOR_NODE_IP:8080"
                     }
                  ]
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/logging\/nodeparts\/logging.tgz"
            },
            {
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/cloud.HSLT\/nodeparts\/iaas.tgz"
            },
            {
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/autoscaling\/nodeparts\/autoscaling.tgz"
            }
         ],
         "scaling":{
            "min":1,
            "max":1
         },
         "image":{
            "type":"medium",
            "image-id":"none",
            "activators":[
               "https:\/\/localhost:9444\/storehouse\/\/admin\/clouds\/mockec2.zip"
            ]
         },
         "name":"application-was",
         "roles":[
            {
               "depends":[
                  {
                     "role":"database-db2.DB2",
                     "parms":{
                        "MAXPOOLSIZE":"$$2",
                        "installDir":"\/opt\/db2jar",
                        "inst_id":1,
                        "POOLTIMEOUT":180,
                        "NONTRAN":false,
                        "DS_JNDI":"jdbc\/TradeDataSource",
                        "MINPOOLSIZE":"$$3"
                     },
                     "type":"DB2"
                  }
               ],
               "parms":{
                  "clientInactivityTimeout":"60",
                  "ARCHIVE":"$$1",
                  "propogatedOrBMTTranLifetimeTimeout":"300",
                  "asyncResponseTimeout":"120",
                  "USERID":"virtuser",
                  "totalTranLifetimeTimeout":"120",
                  "PASSWORD":"<xor>BW4SbzM9FhwuFgUxE2YyOW4="
               },
               "external-uri":"http:\/\/{SERVER}:9080\/",
               "type":"WAS",
               "name":"WAS",
               "requires":{
                  "memory":256
               }
            }
         ],
         "packages":[
            "WAS",
            "WASDB2"
         ]
      },
      {
         "parts":[
            {
               "part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/db2\/parts\/db2-9.7.0.1.tgz",
               "parms":{
                  "installDir":"\/opt\/ibm\/db2\/V9.7"
               }
            },
            {
               "part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/db2\/parts\/db2.scripts.tgz"
            }
         ],
         "node-parts":[
            {
               "parms":{
                  "private":"127.0.0.1"
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/firewall\/nodeparts\/firewall.tgz"
            },
            {
               "parms":{
                  "iaas-port":"8080",
                  "agent-dir":"\/opt\/IBM\/maestro\/agent",
                  "http-port":9999,
                  "iaas-ip":"127.0.0.1"
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/agent\/nodeparts\/agent-linux-x64.tgz"
            },
            {
               "parms":{
                  "installerURL":"files\/itmos-v6.2.2fp2_linuxx64.tar.gz",
                  "omnibustarget":"",
                  "temsip":"",
                  "omnibusip":""
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/monitoring\/nodeparts\/monitoring.tgz"
            },
            {
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/deployinlet\/nodeparts\/deployinlet.tgz"
            },
            {
               "parms":{
                  "collectors":[
                     {
                        "url":"http:\/\/COLLECTOR_NODE_IP:8080"
                     }
                  ]
               },
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/logging\/nodeparts\/logging.tgz"
            },
            {
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/cloud.HSLT\/nodeparts\/iaas.tgz"
            },
            {
               "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/autoscaling\/nodeparts\/autoscaling.tgz"
            }
         ],
         "scaling":{
            "min":1,
            "max":1
         },
         "image":{
            "type":"large",
            "image-id":"none",
            "activators":[
               "https:\/\/localhost:9444\/storehouse\/\/admin\/clouds\/mockec2.zip"
            ]
         },
         "name":"database-db2",
         "roles":[
            {
               "parms":{
                  "DB_PORT":50000,
                  "DB_PATH":"\/home\/db2inst1",
                  "PASSWORD":"<xor>NRA0aWgHOGoRaG47DiU=",
                  "DB_NAME":"adb",
                  "SQL_URL":"https:\/\/localhost:9444\/storehouse\/user\/applications\/a-0d1ac0d4-4e4c-49d7-954f-d4884a6ad703\/artifacts\/setup_db.sql"
               },
               "external-uri":"jdbc:db2:\/\/{SERVER}:50000\/adb:user=db2inst1;password=jOk67Xg5N71dQz;",
               "type":"DB2",
               "name":"DB2"
            }
         ],
         "packages":[
            "DB2"
         ]
      }
   ]
}


Plug-in troubleshooting and monitoring

Plug-ins provide operations that enable users to troubleshoot and monitor their virtual application patterns. As a part of your troubleshooting, you can also connect directly to virtual machines to run and test lifecycle scripts.


Troubleshoot service for plug-ins

Plug-ins provide troubleshooting operations that generate information that is logged in the Logging Service and returned immediately to the user.

This document describes how the troubleshooting service provides a consistent framework and helper methods for plug-ins to create a simplified and pleasant user experience. The troubleshooting service maps the recommended design for a plug-in to create troubleshooting operations. Each plug-in contains the list of troubleshooting operations for its specific role. You can access the troubleshooting operations through the user interface deployment inlet on the Operation tab.

The troubleshooting service uses the deployment inlet operation capabilities with a recommended structure to provide consistency and reduce the work that is required by the plug-in to add troubleshooting operations. The troubleshooting operation is managed by Python lifecycle scripts that the deployment inlet operation starts as the entry and exit point of the operation. The troubleshooting Python lifecycle script can start other scripts that the plug-in needs, but it must clearly indicate whether the scripts were successful and provide the compressed file that is expected from the action.

The troubleshooting Python lifecycle script can take advantage of helper methods that are provided by the troubleshooting service plug-in to make common actions easier. Examples of the actions these methods can provide are mechanisms for returning data to the user through the deployment inlet and interacting with the logging service to store historical troubleshooting information.


Log service for plug-ins

The logging service is a general service to collect multiple types of information. The information is securely transferred from the virtual machine and stored for review by a logging service implementation.

The information that is collected by the logging service is for administrative purposes and not for the application. The service can collect text and binary type file information. The file can be a single snapshot file that is never collected again or an infinitely growing file that can rotate to manage the size.

The logging service is a high-level service that supports zero to multiple registered logging service implementations. The registered implementations are the real processes that provide reports on the multiple types of information collected.

This general logging service presents a subset of the collected information in the Log Viewer page of the workload console and the Virtual Application Console deployment Log Viewer tab.

The Log Viewer displays only the information found on the virtual machine when requested. Historical information cannot be displayed after it is removed, for example, if the data is deleted or rolled over, or if the virtual machine disappears because it is terminated. A logging service implementation can, however, extend the Log Viewer capabilities to access extracted information from a location such as an external storage system, so that historical information remains available even after the virtual machine is no longer available.

The logging service information is explained in more detail in the following sections:


High-level design of the log service

The logging service is automatically included on all virtual machines in a virtual application instance. It is a generic framework that plug-ins use to notify the logging service implementations about the types of information to collect. This design provides the flexibility for different logging service implementations to be registered on a single virtual machine. When a plug-in notifies the logging service about the list of files to collect, it must supply specific information that helps all logging service implementations: a logtype (name, type, and single event pattern) and a list of files or directories to monitor, which can be specific names, patterns, or any file that is created in a monitored directory. The type and single event pattern help the logging service understand details about the file, such as how individual events are contained inside it. The logging service provides a default list of logtypes that can be used and information about how to extend the list of logtypes with custom event patterns, as described in the following sections.


Log service default list of logtypes

This section describes the default list of logtypes. Using these logtypes reduces the need for creating custom logtypes. These logtypes can be used by external resources to monitor the files or directories.


Logtypes. List of logtypes and details

Logtype name Logtype details
File "description":"Single file that is created for log entry"
BinaryFile "description":"Single binary file that is created for log entry"
SingleLine "description":"Each line is a new entry"
MultiLineTimeStamp "description":"Single or multi-line entry where a bracketed date/time, [10/8/10 16:42:54:109 EDT], notes the start of a new entry", "start": "\\[\\d{1,2}/\\d{1,2}/\\d{2}.*\\d{1,2}:\\d{2}:\\d{2}:\\d{3}.*\\w{1,3}\\].*"
MultiLineIP "description":"Single or multi-line entry where an IP address notes the start of a new entry", "start": "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}.*"


Custom logtypes

External resources can create a custom logtype to support custom event patterns for monitoring files and directories. The custom logtype is created by using a custom JSON file that is packaged with the external resource. The plug-in must notify the logging service about this custom logtype file. The basic metadata fields for a logtype entry are as follows:


Custom logtypes. List of custom logtype metadata fields and details

Metadata field ID Value Required
name Specifies a unique name. True
description Specifies a text description of the logtype. False
format Specifies either binary or text. The default is text. False
start Specifies a pattern to determine where an event starts. If no end tag is included, an event ends when the pattern is seen again. False
end Specifies a pattern to determine when an event is complete. Because this tag determines when an event is complete, other start patterns that are found are ignored until the end pattern is found. False


Example

"logtype-config.json" file:
{"types":[
  {
  "name": "adaptorName2",
  "description":"This is a new adaptor",
  "format":"text"
"start": "\\[\\d{2}/\\w{3}/\\d{4}.*\\d{2}:\\d{2}:\\d{2}:\\d{3}.*\\-\\d{4}\\].*Start:.*",
"end": "\\[\\d{2}/\\w{3}/\\d{4}.*\\d{2}:\\d{2}:\\d{2}:\\d{3}.*\\-\\d{4}\\].*End:.*"
  }
]}
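A plug-in makes this file known to the logging service from one of its lifecycle scripts. A minimal sketch, assuming the file is packaged at an illustrative path inside the plug-in installation:

maestro.loggingUtil.registerPluginLogtype('/opt/plugin/logtype-config.json')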


Log service methods

Plug-in Python lifecycle scripts interact with the logging service through the maestro.loggingUtil method calls. The logging service utility exposes the following methods for plug-ins to call, each of which is described in the sections that follow:

• maestro.loggingUtil.monitor(jsonData)
• maestro.loggingUtil.unmonitor(jsonData)
• maestro.loggingUtil.registerPluginLogtype(file)
• maestro.loggingUtil.registerImplementation(ImplName, ImplScript)
• maestro.loggingUtil.unregisterImplementation(ImplName)


Plug-in interaction with the log service

A plug-in must notify the logging service with the list of directories and files to collect for the log viewer and logging service implementations.

The plug-in notifies the logging service when to start and stop monitoring the specific files. The plug-in must provide details about the files so the logging service knows the type of the file (binary or text), and the structure of a single event inside the file. The plug-in uses the logtype to describe the details of a file to the logging service so it can properly handle the events.

The plug-in can call the monitor, unmonitor, and registerPluginLogtype methods inside its lifecycle scripts at any time during lifecycle execution.

The following example shows how a plug-in interacts with the logging service. The example illustrates where the default logtypes are used so that the logging service monitors both specific files and directories for historical purposes.

Inside the example plug-in, the start.py lifecycle script first creates a JSON object that contains a list of files and directories to monitor:

listjson = '{ "role": "'+maestro.role['name']+'", "types": [ { "logtype": "SingleLine", "type": "file", 
"name": "/opt/plugin/log/instance.log"}, {"logtype": "BinaryFile", "type": "dir", "name": "'/opt/plugin/log", 
"pattern": "*.errlog"}, {"logtype": "File", "type": "dir", "name": "/opt/plugin/log"}] }' 
Then, the example plug-in start.py lifecycle script notifies the logging service about the list of files and directories:

maestro.loggingUtil.monitor(listjson) 
The logging service monitors these specific files for display in the log viewer and allows any configured logging service implementations to store these files for historical purposes.
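When the role stops, the plug-in would typically stop monitoring the same files, for example from a stop.py lifecycle script that reuses the listjson string built above:

maestro.loggingUtil.unmonitor(listjson)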


Create a log service implementation

The logging service supports custom implementations such as log backup, log collection and analysis services, and software monitoring tools like Splunk. These implementations act as the underlying process that securely transfers information from the virtual machine and stores it for review.

The logging service implementation is required to follow these steps:

  1. Create a plug-in that contains the logging service implementation that gets registered with the logging service.

    Because the logging service implementation can be used on any virtual machine where the logging service is present, the implementation must be a pattern type plug-in, which allows the logging service implementation to be properly installed and managed by the plug-in infrastructure.

  2. Use the plug-in lifecycle script calls to register with the logging service.

    The logging plug-in implementation uses its lifecycle scripts to register when it is ready to receive forwarded logging service method calls. The method to do this registration is maestro.loggingUtil.registerImplementation(ImplName, ImplScript).

    The implementation provides a name, for example, logbackup, so that other plug-ins that need to interact with the specific implementation know whether that implementation is active on the virtual machine. The name also helps the logging service know that the services are registered. The Python script that is provided contains the implementation of the core forwarding methods.

    When the implementation is deactivated and no longer takes forwarded calls from the logging service, it must unregister. The method to do this unregistration is maestro.loggingUtil.unregisterImplementation(ImplName).

    This method again provides the official name of the implementation.

  3. Implement the core forwarding methods such as monitor, unmonitor, and registerPluginLogtype.

    Each logging service implementation is required to provide a Python script that implements the following methods. These methods are automatically called when the implementation is registered. Any local plug-in on the virtual machine can call these core methods. The following are the methods to be implemented:

    • monitor(jsonData)

      Provides the list of files and directories to be monitored with a logtype. The logtype defines the details about the binary or text file and what a single event structure looks like inside the file. If the service cares about the specific event structure, the logtype defines a generic pattern that indicates the start and end pattern of a single event for that specific file.

    • unmonitor(jsonData)

      Provides the list of files and directories to stop monitoring.

    • registerPluginLogtype(file)

      Provides a file that contains custom logtypes that are provided by a specific plug-in. This file explains unique event patterns for the specific plug-in role, for example:

      {"types":[   
      {     
          "name": "DB2instance",     
          "start": "------------------------------------------------------------.*",     
          "end": "------------------------------------------------------------.*"   
      },   
      {     
          "name": "DB2StandardLog",     
          "start": "\\d{4}\\-\\d{2}\\-\\d{2}\\-\\d{1,2}\\.\\d{1,2}\\.\\d{1,2}\\.\\d{1,6}.*"   
          }  
      ]}
      

    The implementation sets up these calls. For example, the logbackup plug-in creates a registry of files and file patterns required to regularly back up the logs to an external system.
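A minimal Python sketch of such an implementation script follows. Only the three method names come from the contract above; the registry structure, the file name logbackup_impl.py, and the backup behavior are illustrative assumptions.

import json

registry = {}  # role name -> list of monitored file or directory entries

def monitor(jsonData):
    # Record the files and directories to collect for this role.
    data = json.loads(jsonData)
    registry.setdefault(data['role'], []).extend(data['types'])

def unmonitor(jsonData):
    # Stop collecting the listed files and directories for this role.
    data = json.loads(jsonData)
    names = set(entry['name'] for entry in data['types'])
    remaining = [e for e in registry.get(data['role'], []) if e['name'] not in names]
    registry[data['role']] = remaining

def registerPluginLogtype(file):
    # Load custom logtypes so events in the monitored files can be parsed.
    with open(file) as f:
        json.load(f)  # a real implementation would index these event patterns

The implementation registers itself from its own lifecycle scripts, for example:

maestro.loggingUtil.registerImplementation('logbackup', 'logbackup_impl.py')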


Monitor service for plug-ins

Plug-ins provide monitoring operations that collect and display deployment metrics for resource use and performance at the virtual machine, middleware, and application level.

If you are developing your own plug-ins for IBM PureApplication System W1500, you can configure and register collectors for plug-in specific metrics at runtime and apply metadata to define the presentation of the monitoring metrics in the Virtual Application Console deployment panel.


Collector

PureApplication System monitoring provides specific and built-in collectors, and generic and typed collectors. These collectors are based on an open, loosely coupled, and collector-oriented framework. All collectors implement the interface com.ibm.maestro.monitor.ICollectorService, which includes the following methods:

// Creates the collector based on the given configuration values.
// @param config
// @return uniqueId for this collector instance, or null if the collector
//         could not be created
String create(JSONObject config);

// Returns the metadata for the available metrics from the collector.
// @param uniqueId of the collector instance to query
// @return {"categories": [{"categoryType":"<ROLE_CATEGORY>"}],
//          "updateInterval": "<secs>",
//          "<ROLE_CATEGORY>": {<see IMetricService.getServerMetadata()>}}
JSONObject getMetadata(String uniqueId);

// Returns the current metrics from the collector.
// @param uniqueId of the collector instance to query
// @param metricType not used in this release and defaults to "all"
// @return {"<ROLE_CATEGORY>":[{"<METRIC_NAME>":"<METRIC_VALUE>"}, ...], ...}
JSONObject getMetrics(String uniqueId);

// Shuts down the collector instance.
// @param uniqueId of the collector instance to shut down
void delete(String uniqueId);

PureApplication System monitoring has the following types of collectors:


Monitor collector types. Description of the monitoring collector types available for plug-ins.

Name Type Usage
com.ibm.maestro.monitor.collector.script Script Collector for plug-ins that can supply metrics with shell scripts.
com.ibm.maestro.monitor.collector.http HTTP Collector for plug-ins that can supply metrics by HTTP request.

Monitor also implements several collectors for itself that collect operating system metrics from the Monitoring Agent for IBM PureApplication System and the hypervisor relevant to processor, memory, disk, and networking in virtual machines. These collectors are provided in all deployments and can be used by other components or plug-ins without needing to register a separate collector.


Registration

To use PureApplication System monitoring collectors, you must register the collectors with the plug-in configuration, providing the node, role, metrics, and collector facilities information.

PureApplication System provides a Python interface to register the collectors. The definition of the interface is as follows:

maestro.monitorAgent.register('{
      "version":  Number,
      "node":  String,
      "role":  String,
      "collector":  String,
      "config":  JSONObject 
}')

The single parameter to maestro.monitorAgent.register is a JSON string, where:


Monitor collector types. Description of the configuration properties for monitoring collector types.

Collector config.properties
com.ibm.maestro.monitor.collector.script

{
   "metafile"   : "<meta-data file>",
   "executable" : "<executable script>",
   "arguments"  : "<script arguments>",
   "validRC"    : "<valid return code>",
   "workdir"    : "<work dir>",
   "timeout"    : "<time out duration>"
}

com.ibm.maestro.monitor.collector.http

{
   "metafile"    : "<meta-data file>",
   "url"         : "<URL>",
   "query"       : "<query arguments>",
   "timeout"     : "<time out duration>",
   "retry_delay" : "<delay time to next retry>",
   "retry_times" : "<retry times>",
   "datahandler" : "<utility jar properties>"
}

The following code example illustrates the registering script used in the script collector:

maestro.monitorAgent.register('{
  "node":"${maestro.node}",
  "role":"${maestro.role}",
  "collector":"com.ibm.maestro.monitor.collector.script",
  "config":{...}}')

The registering scripts are typically put into appropriate scripts or directories of the plug-in lifecycle to ensure that the plug-in is ready to collect metrics. For example, for the WAS collector, the registering script is placed under the installApp_post_handlers directory where all scripts are started after WAS is running.

Registration with a different type of collector must provide a corresponding configuration. The values for the config.properties file are as follows:


Configuration for the script collector

Property Required? Value
metafile Yes The full path string of the metadata file that contains the JSON object.
executable Yes The full path string of a shell script that provides plug-in metrics output.
arguments No Arguments for the script. The value can be a single string with arguments that are separated by spaces, or it can be an array of strings. Provide the arguments as an array of strings if an individual argument contains a space.
validRC No A string for a valid return code. The default value is 0. The value can be an integer or a string that converts into an integer.
workdir No The full path of the working directory for the script. The default value is the Java system property "java.io.tmpdir".
timeout No The amount of time to wait for the script to run, in seconds. The default value is 5. The value can be a number or a string that converts into a number.

To obtain the full path of the metadata file or script, the registration script can prepare the config.properties by referring to maestro variables, which keep the path and directory information of plug-in installation.
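For example, a lifecycle script might register a script collector as follows. This is a sketch: the literal paths stand in for values that would normally come from the maestro variables, and the metric script itself is assumed to exist.

import json

# Illustrative paths; a real plug-in would derive these from maestro variables.
config = {
    "metafile":   "/opt/plugin/monitor/collector_meta.json",
    "executable": "/opt/plugin/monitor/collect.sh",
    "validRC":    "0",
    "workdir":    "/tmp",
    "timeout":    "5"
}

# maestro is available to plug-in lifecycle scripts; ${maestro.node} and
# ${maestro.role} are substituted as in the registration example above.
maestro.monitorAgent.register('{"version": 1,'
    ' "node": "${maestro.node}",'
    ' "role": "${maestro.role}",'
    ' "collector": "com.ibm.maestro.monitor.collector.script",'
    ' "config": ' + json.dumps(config) + '}')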


Configuration for the HTTP collector

Property Required? Value
metafile Yes The full path string of the metadata file that contains the JSON object.
url Yes The URL string of the requesting plug-in metrics.
query No The arguments string of the query in the HTTP request.
validRC No A code string for a valid HTTP return. The default value is 200. The value can be an integer or a string that converts into an integer.
timeout No The amount of time to wait for the script to run, in seconds. The default value is 5. The value can be a number or a string that converts into a number.
retry_delay No The time interval in seconds that occurs between calling failure and the next retry. The value can be a number or a string that converts into a number.
retry_times No The total number of retry times before entering a delay period. The value can be an integer or a string that converts into an integer.
datahandler No JSON object that includes properties of the utility JAR package for transforming the HTTP response into metrics.


Metadata file

The metadata file is referenced during collector registration.

The plug-in provides a JSON formatted file that includes collector metadata parameters, the metric category types that it wants to expose, and metadata that describes each exposed metric.

The format of the metadata file is as follows:

{
  "Version" : <metadata file version>,
  "update_interval": <interval time in seconds to poll for updated data>,
  "Category": [
      <array of category names to register (1..n)>
  ],
  "Metadata": [
      {
        "<category name from Category[]>": {
            "metrics": [
                {
                  "attribute_name": <attribute from collector to associate to this metric>,
                  "metricsName": <metric name to expose through monitoring agent APIs>,
                  "metricType": <metric value data type, one of "RANGE", "COUNTER",
                                 "PERCENT", "STRING", "AVAILABILITY", "STATUS">
                },
                .. ..
            ]
        }
      },
      .. ..
  ]
}


Metric format

The data input by plug-ins into a collector must follow a specific format so that it can be parsed and transferred by the monitoring agent into metrics. For example, a plug-in that uses the script collector must ensure that the script output is formatted, and a plug-in in the HTTP collector must ensure that the HTTP response or the data handler output is formatted. The metric format is in JSON:

{
  "version": <version number>,
  "category": [
      <category name>,
      .. ..
  ],
  "content": [
      {
        <category name>: {
            <metric name>: <metric value>,
            .. ..
        }
      },
      .. ..
  ]
}

The following example shows metrics that are formatted correctly for the collector:

{
"version": 2,
    "category": [
        "WAS_JVMRuntime",
        "WAS_TransactionManager",
        "WAS_JDBCConnectionPools",
        "WAS_WebApplications"
    ],
    "content": [
        {
            "WAS_JVMRuntime": {
                "jvm_heap_used": 86.28658,
                "used_memory": 176576,
                "heap_size": 204639
            }
        },
        {
            "WAS_TransactionManager": {
                "rolledback_count": 0,                 "active_count": 0,                 "committed_count": 0
            }
        },
        {
            "WAS_JDBCConnectionPools": {
                "max_percent_used": 0,                 "min_percent_used": 0,                 "percent_used": 0,                 "wait_time": 0,                 "min_wait_time": 0,                 "max_wait_time": 0
            }
        },
        {
            "WAS_WebApplications": {
                "max_service_time": 210662,
                "min_service_time": 0,                 "service_time": 8924,
                "request_count": 30
            }
        }
    ]
}
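For instance, a script-collector executable could be a small Python program that prints metrics in this format to standard output. The category and metric names below are illustrative:

import json

# Illustrative metrics for a hypothetical MYAPP_Stats category.
metrics = {
    "version": 2,
    "category": ["MYAPP_Stats"],
    "content": [
        {"MYAPP_Stats": {"request_count": 30, "error_count": 0}}
    ]
}

# The collector parses the script's standard output as JSON.
print(json.dumps(metrics))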


Error handling

Two types of errors can occur while collectors gather formatted metrics: errors at the collector level, and errors at the level of the scripts (for a script collector) or data handlers (for the HTTP collector).

A collector handles collector-level errors directly, but it can handle errors at the script or data handler level only when the errors are returned by the scripts or data handlers. For the script collector, scripts communicate with plug-ins, gather the metrics, and output formatted metrics. Scripts can handle errors, whether expected or unexpected, while communicating and formatting, and then expose the errors to the collector. For the HTTP collector, data handlers transform data in an HTTP response from plug-ins into formatted metrics. Data handlers can handle transformation errors and then expose them to the collector. To communicate errors to the collector, scripts or data handlers must wrap them in an FFDC object:

{
"FFDC": <error message >
}

When a collector gets an FFDC object from scripts or data handlers, it logs the error in log and trace files for troubleshooting. It also propagates the error to the monitoring agent, which then clears the corresponding records from the monitoring cache so that the monitoring API no longer returns old metrics. As a result, the user interface stops displaying metrics, rather than showing these error messages, for plug-ins from which FFDC objects are being collected.

For collector-level errors that are raised when a collector runs scripts, sends HTTP requests, or invokes data transformers, the collector wraps errors in FFDC objects and then logs and propagates them the same way as FFDC objects from scripts and data handlers.

Script collector error messages include:

• The collector has trouble calling the scripts that are registered by the plug-in for outputting metrics, for example, script files are missing.
• The collector gets an error return code (RC) when executing the script files.
• The collector gets nothing or an empty string from the script files.
• The collector fails to parse metrics from the script output because of an unexpected ending or incorrect JSON format.
• The collector gets an error message from the script output.
• The script execution times out.

HTTP collector error messages include:

• The collector times out while waiting for the HTTP response.
• The collector gets nothing from the HTTP response.
• The collector gets an error status code from the HTTP response, such as 4xx for client errors or 5xx for server errors.
• The collector gets an error from the user transformer instead of metrics.
• The collector fails to parse metrics from the user transformer because of incorrect JSON format.
The HTTP collector might catch errors before it invokes the data handler and as a result, no data is available to the data handler for further processing. In this situation, the collector passes in a null object, which makes the data handler aware that there is no data input. The data handler can determine how to generate the final data for the collector. For example, data handlers can either create FFDC objects with a message such as "No data collected" for null object input, or create plug-ins metrics with their own specific values for null, such as "UNKNOWN" for availability.
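A data handler that follows these conventions might look like the following Python sketch; the transform function name and its single-argument interface are assumptions for illustration.

import json

def transform(http_response):
    # Null input means the collector caught an error before invoking the handler.
    if http_response is None:
        return json.dumps({"FFDC": "No data collected"})
    try:
        data = json.loads(http_response)
    except ValueError:
        # Wrap a parsing failure as an FFDC object so the collector can log it.
        return json.dumps({"FFDC": "Incorrect JSON format in HTTP response"})
    # Transform the response into the metric format; the category is illustrative.
    return json.dumps({
        "version": 2,
        "category": ["MYAPP_Stats"],
        "content": [{"MYAPP_Stats": data}]
    })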


User interface presentation

Plug-in metrics are displayed on the Middleware Monitoring tab of the Virtual Application Console. Plug-ins provide metadata to describe the metric and category for displaying the metrics, and define the format for displaying metrics.

The monitoring_ui.json file is located under the plugin directory of a plug-in project, for example, plugin.com.ibm.was/plugin/monitoring_ui.json. Other JSON files are also in this directory, including config.json and config.meta.json.

Note: Middleware Monitoring does not display invisible roles. A role is invisible if dashboard.visible is set to false for the role in the topology model. By default, the value is set to true.

 "role" : {
        "name"  : "$roleName",
        "type"  : "$roleName",
        "dashboard.visible" : false,

The metadata is defined in monitoring_ui.json. Two versions of this file are supported:

Figure 1. monitoring_ui.json for a single role

[
{
        "version": 1,
        "category": <category name from Category[] defined in metric metadata>,
        "label": <the content shown on the chart for the category>,
        "displays": [
            {
                "label": <string shown on the chart element for the metric>,
                "monitorType": <time and type properties of the metric to display>,
                "chartType": <chart type for displaying the metric>,
                "metrics": [
                    {
                        "attributeName": <metric name defined in metric metadata>,
                        "label": <string shown on the chart element for the metric>
                    }
                ]
            }
        ]
},
.. ..
]
Version 1 assumes that monitoring_ui.json serves a single role that has the same name as the plug-in. Use it with plug-ins that contain only a single role and no cross-references to other plug-ins.

To support multiple roles within a plug-in, Version 2 has an extra array-type attribute displayRoles, which can associate one metric category with one or more roles.

Figure 2. Version 2 of monitoring_ui.json for multiple roles

[
{
        "version": 2,
        "displayRoles": [<role name>, ...],
        "category": <category name from Category[] defined in metric metadata>,
        "label": <the content shown on the chart for the category>,
        "displays": [
            {
                "label": <string shown on the chart element for the metric>,
                "monitorType": <time and type properties of the metric to display>,
                "chartType": <chart type for displaying the metric>,
                "metrics": [
                    {
                        "attributeName": <metric name defined in metric metadata>,
                        "label": <string shown on the chart element for the metric>
                    }
                ]
            }
        ]
},
.. ..
]

For both versions of the monitoring_ui.json file, displays define attributes for the appearance of the metric in the user interface. All metrics in one category are displayed the same way and share one chart. The monitorType and chartType attributes should be used together to define how the metrics look. For example, if monitorType is set to HistoricalNumber and chartType is set to Lines for a category of metrics, the metrics are displayed as a line graph with time on the X axis and metric values on the Y axis.


Monitor types. Monitor types that are available to define metric data in plug-ins

Monitor types (monitorType) Description
HistoricalNumber Metric data in simple number for historical timeline
HistoricalPercentage Metric data in percentage for historical timeline
RealtimeNumber Metric data in simple number for current temporality
RealtimePercentage Metric data in percentage for current temporality


Chart types. Chart types that are available to define metric data in plug-ins.

Chart types (chartType) Presentation
Lines Line chart
StackedAreas Stacked line chart (area chart)
StackedColumns Column chart

As an alternative to monitorType and chartType, you can use chartWidgetName to define the appearance. The following example shows the use of chartWidgetName.

[
{
    .. ..
        "category": "DATABASE_DRILLDOWN_HEALTH",
        "label": "Database Health Indicator",
        "displays": [
            {
                "label": "Database Health Indicator",
                "chartWidgetName": "paas.widgets.HealthStatusTrend",
                "metrics": [
                    {
                        "attributeName": "data_server_status",
                        "label": "Data_Server_Status"
                    },
                    {
                        "attributeName": "io",
                        "label": "I/O"
                    },
                    {
                        "attributeName": "locking",
                        "label": "Locking"
                    },
                    {
                        "attributeName": "logging",
                        "label": "Logging"
                    },
                    {
                        "attributeName": "memory",
                        "label": "Memory"
                    },
                    {
                        "attributeName": "recovery",
                        "label": "Recovery"
                    },
                    {
                        "attributeName": "sorting",
                        "label": "Sorting"
                    },
                    {
                        "attributeName": "storage",
                        "label": "Storage"
                    },
                    {
                        "attributeName": "workload",
                        "label": "Workload"
                    }
                ]
            }
        ]
    }
]
Metrics with this configuration display as a two-column list, with metric labels in the first column and a colored square indicator icon that shows the status in the second column.


Availability

After a plug-in is deployed, its role has a special status, called availability, that indicates its overall health. Availability has the following status values: "NORMAL", "WARNING", "CRITICAL", and "UNKNOWN". An icon is associated with each value.

To provide health status for a role, a plug-in can bind one of its metrics to the availability so that the monitoring service can show the status and update indicator icons based on the current value of the metric. Set metric_type to AVAILABILITY in the plug-in metadata file to make this association.

..  ..
"metadata":[
      {
            "DATABASE_AVAILABILITY":{
                "metrics":[
                    {
                        "attribute_name":"database_availability",
                        "metric_name":"database_availability",
                        "metric_type":"AVAILABILITY"
                    }
                ]
            }
      },
..  ..

The metric that is associated with availability can belong to any category, but a plug-in can have only one metric for binding. If multiple bindings are defined, only the first one is effective and the rest are ignored. The metric for binding must be a String type and can accept only the supported values at run time: "NORMAL", "WARNING", "CRITICAL", and "UNKNOWN".

For a plug-in that binds a metric to availability, set the "UNKNOWN" status when the plug-in cannot retrieve metrics for availability. This can be achieved in collector scripts (if the plug-in uses the script collector) or data handlers (if it uses the HTTP collector).
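A collector script might compute the availability value as in this sketch; the probe function is a hypothetical plug-in specific check.

import json

def database_availability():
    try:
        healthy = check_database()  # hypothetical plug-in specific probe
        return "NORMAL" if healthy else "CRITICAL"
    except Exception:
        # The metric cannot be retrieved, so report UNKNOWN.
        return "UNKNOWN"

print(json.dumps({
    "version": 2,
    "category": ["DATABASE_AVAILABILITY"],
    "content": [
        {"DATABASE_AVAILABILITY": {"database_availability": database_availability()}}
    ]
}))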

If a plug-in does not bind a metric to availability, the monitoring service applies the following algorithm to generate availability for the plug-in role:


Troubleshoot monitoring collectors


Problem: I registered the collector for roles in my plug-in, but the roles are not listed in Middleware Monitoring View.


Resolution: There are several possible causes:


Problem: I can see my roles that are listed in Monitoring View, but I cannot see their metrics. The message "CWZMO0040W: No real-time metric data is found for deployment" is displayed.


Resolution: The error message displays when the monitoring service cannot find metrics for a certain role anymore. Check the log and trace files to verify that the collector is working properly.


Auto scaling

The elastic scaling, or auto scaling, feature in a plug-in uses monitoring. Auto scaling automatically adds or removes virtual application and shared service instances based on workload.

You can optionally turn on the auto scaling feature by attaching the scaling policy to a target application or shared service. The policy is also used to deliver the scaling requirements, including the trigger event, trigger time, and instance number, to the back-end engine that drives the scaling procedure.


Scaling policy overview

The auto scaling policy can be attached to two kinds of components in PureApplication System: a virtual application and a shared service. For a virtual application, you explicitly add the scaling policy to one or more components of the application in the Virtual Application Builder. For a shared service, the scaling policy must be described in the application model that the plug-in developer creates, if the service requires the auto scaling capability.

Plug-ins, whether for virtual applications or shared services, define the scaling policy, describe the policy in the application model, and provide transformers that interpret the policy and add scaling attributes to the topology document when the policy is deployed. The application builder automatically generates the scaling policy segment of the application model only if you are using shared services. At run time, the back-end auto scaling engine first loads the scaling attributes and generates the rule set for the scaling trigger. Then, the back-end engine evaluates the rule set and decides whether the workload has reached a threshold for adding or removing application or shared service instances. The final step of the process is to complete the request.


Policy elements

The auto scaling policy is composed of elements for different scaling aspects: the trigger event, the trigger time, and the instance number range.

To apply the auto scaling policy to a plug-in, ensure that the scaling policy is defined in the application model that the plug-in is associated with, which collects the user-specified requirements for the scaling capability. Also, ensure that the policy is transformed into the topology document, which guides the back-end engine to inspect the trigger event and take scaling actions.


Application model

Auto scaling capability is embodied as a policy in the application model. The application model is used to describe the components, policies, and links in the virtual applications or shared services. For virtual applications, the model can be visually displayed and edited with the Virtual Application Builder.

Virtual application designers can customize components and policies, including the auto scaling policy, in the Virtual Application Builder. There is no tool to visualize shared services in the application model. Auto scaling can be customized only in the Virtual Application Console when the service is deployed. The scaling policy that is described in the application model, for either a virtual application or shared service, follows the application model specification. The policy is defined in the node with a group of attributes.

The three auto scaling elements, trigger event, trigger time, and instance number range, are described in the attribute set. There is no naming convention for the attribute keys, but the plug-in must understand them to transfer them into a topology document. The following code is an example of the elements that are described in the plug-in:

"model": {
   "nodes": [
           {
         ..
             },
             {
             "id": <policy id>
             "type":<policy type>
             "attributes": {
    
                      <No.1 metric id for trigger event>: [
                      < threshold for scale-in >,
                      < threshold for scale-out >
                       ],
                      <No.1 metric for trigger event>: [
                      < threshold for scale-in >,
                      < threshold for scale-out >
                       ],
                      <.. :[.. ,.. ]>
                      <No.n metric for trigger event>: [
                      < threshold for scale-in >,
                      < threshold for scale-out >
                        ],
                      <trigger time id>: <trigger time value>
                      <instance range number id": [
                         <min number>,
                         <max number>
                        ],
          }
      },
      {
        ......
      }
    ]
}

The attributes describe the scaling policy in an application model. As the example JSON segment shows, the Trigger Event can include multiple metrics and thresholds for one scaling policy, which means that the scaling operations on a plug-in can be triggered by different condition entries with different metrics. The relationship among these entries is explicitly defined by the plug-in transformer and marked in the topology document. It is not required to mark the relationship in the application model, although the entry labels can be used to define the relationship in the user interface. PureApplication System requires that a plug-in provide metadata that explains the components in the application model for user interface presentation. For the scaling policy, the plug-in can apply the correct widget types and data types to the attributes for Trigger Event, Trigger Time, and Instance Number Scope.
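As an illustration, the mapping that a transformer performs might look like the following Python sketch. The application model attribute keys ("CPU.Used", "triggerTime", "instanceRange") are illustrative; the topology attribute names are the ones described in the following Topology model section.

def transform_scaling_policy(attributes, vm_template):
    # Map user-facing policy attributes from the application model to the
    # named scaling attributes of the topology document.
    scale_in, scale_out = attributes['CPU.Used']  # [scale-in, scale-out] thresholds
    vm_template['scaling'] = {
        "triggerEvents": [{
            "metric": "CPU.Used",
            "scaleInThreshold": {"value": scale_in, "type": "CONSTANT", "relation": "<"},
            "scaleOutThreshold": {"value": scale_out, "type": "CONSTANT", "relation": ">="},
            "conjunction": "OR"
        }],
        "triggerTime": attributes['triggerTime'],
        "min": attributes['instanceRange'][0],
        "max": attributes['instanceRange'][1]
    }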


Topology model

In the topology document, the scaling section is extended to contain the attributes from auto scaling. A basic scaling section contains only the "min" and "max" attributes, which typically have the same value. The value indicates the size of a fixed cluster on the plug-in template.

"vm-templates": [
    {
      ..
      scaling :{
                 "min": <number,
                 "max": <number>,
                  }
    },
    {
      ..
    }
]

When more attributes such as "triggerEvents" and "triggerTime" are included in the scaling section, it evolves into an auto scaling capability on the cluster. The values of "min" and "max" no longer need to be the same: "min" is the lower limit of the Instance Number Scope and "max" is the upper limit. The attributes for auto scaling are shown in the following JSON code example.

"vm-templates": [
{
  ..
  scaling :{
     "role" : <role type for the template>,
     "triggerEvents": [
         {
            "metric": <metric category and item linked by ".">,
            "scaleOutThreshold": {
                "value": <metric value with its data type>,
                "type": "CONSTANT",
                "relation": <comparison symbol including "<", 
                                                         ">", 
                                                         "<=",
                                                         ">=" >
                             },
            "conjunction": <conjunction type with other trigger 
                           events including "OR", "AND">
            "scaleInThreshold": {
                "value": <number>,
                "type": "CONSTANT",
                "relation": <comparison symbol>
                         }
          },
         
        {
            "metric": " metric category and item",
            "scaleOutThreshold": {
                "value":<number>,
                "type": "CONSTANT",
                "relation": <comparison symbol>
            },
            "conjunction": <conjunction type with other trigger 
                           events>
            "scaleInThreshold": {
                "value": <number>,
                "type": "CONSTANT",
                "relation": <comparison symbol>,
                "electMetricTimeliness": <"historical"|"instant">
            }
            
        },
    {
       
      {
      ..
        }
    ],
    "min": <number>,
    "max": <number>,
    "maxcpucount": <number>,
    "minmemory": <number>,
    "cpucountUpIncrement": <number>,
    "memoryUpIncrement": <number>,
    "triggerTime": <number>,
   }
   .. 
 },
 {
  ..
 }
]

PureApplication System supports multiple trigger events for a scaling operation. Those events are currently aggregated in two modes: OR and AND. In the OR mode, the scaling operation is triggered if any one event happens. In the AND mode, the scaling operation is triggered only if all events happen at the same time. Auto scaling depends on monitoring to collect metrics for inspection. To ensure that the right metrics are collected, the value of the "metric" key in each trigger event must be consistent with the "category" and "attributeName" attributes, which are defined in the plug-in metadata for monitoring collectors. The values are joined by "." to form "metric". For example, "CPU.Used" represents the metric with a category of "CPU" and an attributeName of "Used". Monitoring also provides a group of OS-level metrics, which plug-in developers can also select and use for auto scaling.

Some attributes are specific to a particular scaling type. For example, min and max are used with horizontal scaling. The following table lists attributes and their associated scaling type.


Attributes

Type Key Description
Horizontal scaling min The minimum number of virtual machines that a role can have
 max The maximum number of virtual machines that a role can have
 scaleInThreshold Metric and its threshold for the scale-in action
 scaleOutThreshold Metric and its threshold for the scale-out action
Vertical scaling maxcpucount The maximum number of cores that a virtual machine can have
 scaleUpCPUThreshold Metric and its threshold for the scale-up CPU action
 cpucountUpIncrement Core count to increase by in one scale-up CPU action
 minmemory The minimum memory size for a virtual machine
 maxmemory The maximum memory size for a virtual machine
 scaleUpMemoryThreshold Metric and its threshold for the scale-up memory action
 memoryUpIncrement Memory size to increase by in one scale-up memory action

The triggerTime attribute is shared by several scaling types and trigger events. It can be placed either inside or outside a triggerEvent object, and its placement determines its scope. If triggerTime is placed inside a triggerEvent object, it applies only to that triggerEvent object. If triggerTime is placed outside the triggerEvent objects, it applies globally to all triggerEvent objects. If a triggerTime attribute appears both inside and outside a triggerEvent object, the triggerTime inside the object takes precedence.

The transformer that is provided by the plug-in must define the attributes of the scaling policy in the application model and map them to the named attributes in the topology document. The Trigger Event, Trigger Time, and Instance Number Scope auto scaling elements correspond to "triggerEvents", "triggerTime", and "min" and "max".

The scale-in action is more complicated than scale-out because a candidate role instance must be selected. The selection logic of scale-in follows predefined rules that try to make scale-in as reasonable as possible. The logic selects the scale-in candidate in priority order from Rule 1 to Rule 4:

  1. The terminated or failed virtual machine, except for the master.
  2. The virtual machine that has a status other than RUNNING, such as LAUNCHING, INITIALIZING, STARTING, except for the master.
  3. The virtual machine whose role has the least or greatest value of the specified metric in the cluster, except for the master.
  4. A random virtual machine, except for the master.

This selection logic ensures that unworkable virtual machines are removed first, and a random one is removed last. For Rule 3, users can define a group of scaling attributes in the topology to customize, for example, the metric type, value feature, and timeliness. For convenience, the customization reuses some of the attributes described for auto scaling. Here is a scaling template example:

{
    "role" : "WAS",
    "triggerEvents": [
        {
           "metric": "CPU.Used",
            "scaleOutThreshold": {
                "value": 80,
                "type": "CONSTANT",
                "relation": ">="
            },
            "conjunction": "OR",
            "scaleInThreshold": {
                "value": 20,
                "type": "CONSTANT",
                "relation": "<",
                "electMetricTimeliness" : "historical"
            }
        }
    ],
    "min": 1,
    "max": 10,
    "triggerTime": 120
}

The values for "metric", "scaleInThreshold", "relation", and "electMetricTimeliness" are used to guide how to select a WAS instance for scale-in if plug-in provides manual scaling operations. In this example, "metric" specifies that processor utilization is the metric. The "<" for "relation" specifies that the candidate instance for scale-in is the one with lowest processor utilization in the cluster. A value of ">" would indicate the greatest processor utilization instead. For "electMetricTimeliness", the value can be "historical" or "instance". The "historical" value specifies that the scale-in instance is selected on an average of historical value in 5 minutes.


Manual scaling

Manual scaling provides virtual application administrators with a flexible and controllable way to add or remove instances of virtual applications or shared services. By using "autoscalingAgent.scale_out" and "autoscalingAgent.scale_in", manual scaling can run in an autoscaling-safe way. Customization of some manual scaling features is supported, typically focusing on scale-in, by using the manual scaling policy. When a plug-in exposes manual scaling operations, it transforms the policy into predefined attributes of the topology, which the scaling back-end uses to achieve the customized features.

In the topology document, manual scaling uses the same attributes as auto scaling.

"scaling": {
                   "role": "RTEMS"
                   "triggerEvents": [{
                          "metric": "RTEMS.ConnectionNumber ",
                          "scaleOutThreshold": {  ..   },
                          "conjunction": "OR",
                          "scaleInThreshold": {
                                    "value": 20,
                                    "type": "CONSTANT",
                                    "relation": "<",
                              "electMetricTimeliness" : "instant"
            }
 
             }
                       ],
                   "triggerTime": 120,
                   "min": 1,
                   "max": 10,
                   "manual": {
                             "scaleInMetric":"RTEMS.ConnectionNumber",
                             "metricType" : "instant",
                             "rule": "minimum"
                             }
                  }
This example shows a deployment with both an auto scaling and a manual scaling policy on Remote Tivoli Enterprise Management Server (RTEMS). The scale-in is triggered automatically by using "triggerEvents" and "triggerTime", and users can also apply it manually. For manual scale-in, the RTEMS instance with the lowest value of "ConnectionNumber" among all instances is selected as the one to destroy each time.

Use the following guidelines when you develop auto scaling and manual scaling policies and operations for plug-ins:


Scaling Interface

The autoscalingAgent utility defines a generic API for Python scripts to interact with the auto scaling agent on the virtual machine. The utility provides several functions:

Manual scale-out request for the specified role and template. The parameter is a JSON-like string. The "vmTemplate" and "roleType" values indicate the specified role and template for applying "scale-out". Calling the function fails if:

maestro.autoscalingAgent.scale_out('{
             "vmTemplate": String,
             "roleType": String
         }')

The following examples show usage of the function:

maestro.autoscalingAgent.scale_out('{"vmTemplate":"Web_Application-was", "roleType":"WAS"}')

Manual scale-in request for the specified role and template. The parameter is a JSON-like string. The values for "vmTemplate" and "roleType" indicate the specified role and template for applying "scale-in". The "node" attribute is optional and indicates the name of the node to be removed. If no "node" is in the parameter, scale-in follows the predefined rules and the manual scaling policy to select a node for removal. Calling this function fails if:

maestro.autoscalingAgent.scale_in('{
             "vmTemplate": String,
             "roleType": String,
             ["node": String]
         }')
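
The following call shows usage of the function; the node name here is only illustrative:

# The "node" value is illustrative; omit it to let the predefined rules
# and the manual scaling policy select the node to remove.
maestro.autoscalingAgent.scale_in('{"vmTemplate":"Web_Application-was", "roleType":"WAS", "node":"Web_Application-was.11310621328463"}')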

Pause all scaling tasks that are running for the deployment. Calling this function fails if:

maestro.autoscalingAgent.pause_autoscaling()

Resume all scaling tasks that are running for the deployment. Calling this function fails if:

maestro.autoscalingAgent.resume_autoscaling()

Enable the function of the autoscaling agent. Calling this function fails if:

maestro.autoscalingAgent.enable_autoscaling()

Disable the function of the autoscaling agent. Calling this function fails if:

maestro.autoscalingAgent.disable_autoscaling()

Note: Although there are other ways to launch or destroy virtual machines, such as the kernel services APIs and virtual machine actions, they are not all safe for auto scaling, which means they can conflict or interfere with auto scaling tasks that are running in the deployment. For example, auto scaling tasks are suspended when a scaling operation is triggered, and they do not resume until the deployment returns to a steady state, meaning that a newly deployed virtual machine is running or a destroyed virtual machine is gone. This approach ensures that auto scaling correctly uses the metrics from running roles and virtual machines, and ignores the roles and virtual machines that are not working. Kernel services APIs and virtual machine actions do not notify auto scaling when they trigger launch or destroy actions, so auto scaling might use data from new virtual machines that are still launching or from destroyed virtual machines that are being removed. This additional data can lead to unnecessary new scaling actions. The autoscalingAgent.scale_out and autoscalingAgent.scale_in interfaces are safe for auto scaling. Using them helps you avoid these issues and other possible interference and conflicts between auto scaling and manual scaling.
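
Because pause and resume are exposed through the same utility, a plug-in script that must change virtual machines outside of a scaling operation can bracket that work with these calls. The following is a minimal sketch; it assumes that it runs inside a plug-in lifecycle or operation script where the maestro module is available:

# Minimal sketch: suspend auto scaling around work that changes the
# set of virtual machines, so that in-flight metrics do not trigger
# unnecessary scaling actions. Assumes the maestro module is available,
# as in any plug-in lifecycle script.
maestro.autoscalingAgent.pause_autoscaling()
try:
    # ... perform the maintenance work here ...
    pass
finally:
    # Always resume, even if the maintenance work fails.
    maestro.autoscalingAgent.resume_autoscaling()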


Metrics collected by Monitoring Agent for IBM PureApplication System

The Monitoring Agent for IBM PureApplication System collects metrics at the operating system level for monitoring virtual machines in virtual application instances.

This section describes the collected metrics in the following table.


Metrics for the Linux agent

Category   Name                          Type     Origin (Column)  Origin (Table)  Reference for auto scaling
CPU        busy_cpu                      PERCENT  BUSYCPU          KLZCPU          CPU.Used
MEMORY     memory_total                  RANGE    MEMTOT           KLZVM           MEMORY.memory_total
MEMORY     memory_free                   RANGE    MEMFREE          KLZVM           MEMORY.memory_free
MEMORY     memory_used                   RANGE    MEMUSED          KLZVM           MEMORY.memory_used
MEMORY     memory_used_percent           PERCENT  MEMUSEDPCT       KLZVM           MEMORY.memory_used_percent
MEMORY     memory_cache                  RANGE    MEMFREEPCT       KLZVM           MEMORY.memory_cache
DISKIO_0   bytes_reads_per_second        RANGE    RDBYTESEC        KLZIOEXT        DISKIO_0.blocks_reads_per_second
DISKIO_0   bytes_written_per_second      RANGE    WRBYTESEC        KLZIOEXT        DISKIO_0.blocks_written_per_second
DISKIO_1   bytes_reads_per_second        RANGE    RDBYTESEC        KLZIOEXT        DISKIO_1.blocks_reads_per_second
DISKIO_1   bytes_written_per_second      RANGE    WRBYTESEC        KLZIOEXT        DISKIO_1.blocks_written_per_second
DISK_0     disk_used_percent             PERCENT  DSKUSEDPCT       KLZDISK         DISK_0.disk_used_percent
DISK_0     mount_point                   STRING   MOUNTPT          KLZDISK         Not applicable
DISK_1     disk_used_percent             PERCENT  DSKUSEDPCT       KLZDISK         DISK_1.disk_used_percent
DISK_1     mount_point                   STRING   MOUNTPT          KLZDISK         Not applicable
NETWORK    megabytes_received_per_sec    RANGE    RECBPS           KLZNET          NETWORK.megabytes_received_per_sec
NETWORK    megabytes_transmitted_per_sec RANGE    TRANSBPS         KLZNET          NETWORK.megabytes_transmitted_per_sec
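
The values in the Reference for auto scaling column are what a scaling policy references in its "metric" attribute. For example, a hypothetical trigger event based on memory utilization could reuse the threshold structure shown earlier:

{
    "metric": "MEMORY.memory_used_percent",
    "scaleOutThreshold": {
        "value": 80,
        "type": "CONSTANT",
        "relation": ">="
    },
    "conjunction": "OR",
    "scaleInThreshold": {
        "value": 20,
        "type": "CONSTANT",
        "relation": "<"
    }
}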


Monitor collector examples

These examples show how metrics are collected in the plugin.com.ibm.was plug-in.


Script collector

The plugin.com.ibm.was plug-in can use the script collector to run its shell scripts to collect the metrics from wsadmin. The following files show use of the script collector.

monitoring_register_collector.py

This file is the collector registration script that is in the plugin/parts/was.scripts/WAS/installApp_post_handlers subdirectory. It creates the registration data and configuration for the script collector and registers the collector by using maestro.monitorAgent.register. The metadata file and executable script file are placed in predefined directories in the parts subdirectory and are copied into corresponding directories when the plug-in is installed on a virtual machine. In this example, the paths of the metadata file and the executable script are concatenated from maestro variable values and the relative paths of the predefined directories. The following code, extracted from monitoring_register_collector.py, shows how the config in the registration is built.

node = maestro.node
role = maestro.role
# ...
scriptdir = node['scriptdir']
node_name = node['name']
role_name = node['name'] + '.' + role['name']
monitorscripts = scriptdir + '/WAS/monitor_scripts/'
metafile = monitorscripts + '/metadata/WASModuleMetrics.metadata'
executable = monitorscripts + '/wsadmin_controller.sh'
registration = ('{ "node":"' + node_name + '","role":"' + role_name
                + '","collector" : "com.ibm.maestro.monitor.collector.script","config":'
                + '{"metafile":"' + metafile + '","executable":"'
                + executable + '","arguments": "","validRC": "0","workdir":"'
                + monitorscripts + '","timeout": "10"}}')
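
The script then passes the built string to the monitor agent by using the registration call named earlier; a minimal sketch:

# Register the script collector with the monitor agent.
maestro.monitorAgent.register(registration)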

The registration data and config look like the following JSON object when this script runs:

{
    "node": "Web_Application-was.11310621328463",
    "role": "Web_Application-was.11310621328463.WAS",
    "collector": "com.ibm.maestro.monitor.collector.script",
    "config": {
        "metafile": "/opt/IBM/maestro/agent/usr/servers
                     /Web_Application-was.11310621328463/monitor
                     /wascollector/WASModuleMetrics.metadata",
        "executable": "/opt/IBM/maestro/agent/usr/servers
                       /Web_Application-was.11310621328463
                       /monitor/wascollector/wsadmin_controller.sh",
        "arguments": "",
        "validRC": "0",
        "workdir": "/opt/IBM/maestro/agent/usr/servers
                    /Web_Application-was.11310621328463/monior/wascollector",
        "timeout": "10" 
    }
}
The script wsadmin_controller.sh calls other scripts, including run_wsadmin.sh and collector_agent.jy, which start wsadmin, format the data from wsadmin into a local file, read the data back from that file, and then print the metrics. The implementation of all these scripts depends mainly on wsadmin usage.

WASModuleMetrics.metadata

This metadata file is in the plugin/parts/was.scripts/WAS/monitoring_scripts subdirectory. It defines metric categories and data types.

{
    "Version" : 1,
    "Category": [
        "WAS_WebApplications",
        "WAS_JVMRuntime",
        "WAS_TransactionManager",
        "WAS_JDBCConnectionPools"
    ],
    "Metadata": [
        {
            "WAS_WebApplications" : {
                "updateInterval": 15,
                "metrics": [
                    {
                        "attributeName": "MaxServiceTime",
                        "metricName": "MaxServiceTime",
                        "metricType": "RANGE"
                    } ,
                    {
                        "attributeName": "ServiceTime",
                        "metricName": "ServiceTime",
                        "metricType": "RANGE"
                    } ,
                    {
                        "attributeName": "RequestCount",
                        "metricName": "RequestCount",
                        "metricType": "COUNTER"
                    } ,
                    {
                        "attributeName": "MinServiceTime",
                        "metricName": "MinServiceTime",
                        "metricType": "RANGE"
                    }
                ]
            }
        },
.. ..

monitoring_ui.json

This presentation metadata file is in the plugin/parts/templates subdirectory. It defines how metrics are displayed.

[
    {
        "category": "WAS_WebApplications",
        "label": "WAS_WebApplications_Request_Count",
        "displays": [
            {
                "label": "",
                "monitorType": "HistoricalNumber",
                "chartType": "Lines",
                "metrics": [
                    {
                        "attributeName": "RequestCount",
                        "label": "Request_Count" 
                    } 
                ] 
            } 
        ] 
    },
    {
        "category": "WAS_WebApplications",
        "label": "WAS_WebApplications_Service_Time",
        "displays": [
            {
                "label": "",
                "monitorType": "HistoricalNumber",
                "chartType": "Lines",
                "metrics": [
                    {
                        "attributeName": "MaxServiceTime",
                        "label": "Max_Service_Time" 
                    },
                    {
                        "attributeName": "MinServiceTime",
                        "label": "Min_Service_Time" 
                    },
                    {
                        "attributeName": "ServiceTime",
                        "label": "Avg_Service_Time" 
                    } 
                ] 
            } 
        ] 
    },
.. ..


HTTP collector

The plugin.com.ibm.was plug-in can also use the HTTP collector to send a request to the PerfServletApp application that runs on WAS and get metrics by processing the HTTP response. PerfServletApp is a default web application that is installed with WAS. It provides a way to work with the Performance Monitoring Infrastructure (PMI) to retrieve the original performance statistics. If the application is not included with the WAS installation, it must be included as a part of the plugin.com.ibm.was plug-in and installed before you register and use the HTTP collector. The following files from the plug-in show the use of the HTTP collector.

monitoring_register_http_collector.py

This script functions like monitoring_register_collector.py, described in the script collector example, except that it builds the config for an HTTP collector type. It is stored in plugin/parts/was.scripts/WAS/installApp_post_handlers. The PerfServletApp application returns performance statistics from PMI in XML format, which is not compatible with the JSON format used by the monitoring collectors. A utility .jar file called WASResponse.jar is used to transform the XML data into the required JSON format. This file is placed under parts/was.scripts/WAS/monitoring_scripts with the other scripts and metadata files. The registration includes the datahandler properties in the config. The following example shows how the config in the registration is built.

node = maestro.node
role = maestro.role
# ...
scriptdir = node['scriptdir']
node_name = node['name']
role_name = node['name'] + '.' + role['name']
monitorscripts = scriptdir + '/WAS/monitor_scripts/'
metafile = monitorscripts + '/metadata/WASModuleMetrics.metadata'
url = 'http://localhost:9080/wasPerfTool/servlet/perfservlet'
jardir = scriptdir + '/WAS/monitor_scripts/handler/'
registration = ('{ "node":"' + node_name + '","role":"' + role_name + '",'
                + '"collector" : "com.ibm.maestro.monitor.collector.http",'
                + '"config":{"metafile":"' + metafile + '","url":"' + url + '",'
                + '"datahandler": {"jardir":"' + jardir + '",'
                + '"class" :"com.ibm.maestro.was.perf.formatter.Transformer",'
                + '"method":"PMIStatToIWDMetrics" }}}')
The registration data and config look like the following JSON object when this script runs:

{
    "node": "Web_Application-was.11310621328463",
    "role": "Web_Application-was.11310621328463.WAS",
    "collector": "com.ibm.maestro.monitor.collector.http",
    "config": {
        "metafile": "/opt/IBM/maestro/../WASPerfMetadata.json",
        "url": "http://localhost:9080/wasPerfTool/servlet/perfservlet",
        "query": "",
        "datahandler": {
            "jardir": "resources/jars",
            "class": " com.ibm.maestro.was.perf.formatter.Transformer ",
            "method": " PMIStatToIWDMetrics"
        }
    }
}
In this example, the jardir value of datahandler indicates the directory that contains WASResponse.jar, and com.ibm.maestro.was.perf.formatter.Transformer is the name of the class in the utility .jar file that exposes the public method named PMIStatToIWDMetrics.

monitoring_install_perservlet.py

To install the PerfServletApp application, the script monitoring_install_perservlet.py is included in the parts/was.scripts/WAS/before_start_handlers subdirectory; scripts in this subdirectory are called by default after the WAS installation is finished and right before WAS is started. At execution time, wsadmin is already available to install and start PerfServletApp. The script uses the maestro interface to install the servlet application. It also provides PerfServletApp information to wsadmin for finding the servlet package. The following example is an extract of the script.

cmd = [installDir + '/IBM/WebSphere/AppServer/bin/wsadmin.sh', '-conntype', 'None', '-lang', 'jython', ]
archive_file = scriptdir + '/WAS/monitor_scripts/PerfServletApp.ear'
archive_name = 'PerfServletApp.ear'
args = [archive_file, archive_name]
cmd.extend(['-f', scriptdir + '/WAS/was_install_app.py'])
cmd.extend(args)
 
rc = maestro.trace_call(logger, cmd)
maestro.check_status(rc, 'Installing WAS performance servlet')
The metadata files WASModuleMetrics.metadata and monitoring_ui.json in the script collector example can also be used for the HTTP collector.


Scaling policy examples

Metrics and thresholds are used to manage scaling.


WAS scaling policy

There are several types of WAS scaling, which are based on specific thresholds and metrics.

Static

Scaling is based on a number of instances.

CPU Based

Scaling is based on a processor usage threshold range, an instance number threshold range, and a minimum time to trigger an add or remove.

Memory based

Scaling is based on a memory usage threshold range and a minimum time to trigger an add or remove.

Response Time Based

Scaling is based on a web response time threshold range and a minimum time to trigger an add or remove.

Web to DB

Scaling is based on

  • A web response time threshold range, JDBC wait time range, or JDBC pool usage threshold range (in percent)
  • An instance number range
  • A minimum time to trigger an add or remove.

In the appmodel/metadata.json file in the WAS plug-in, the widget "group" is used to present scaling types. The following JSON snippet shows the usage of "group" for scaling policy metadata, where four types are grouped into a "category" of SCALE_POLICY_TYPE.

"groups": [      
          {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "None",
                "attributes": [
                "intialInstanceNumber"
                ],      
            } ,
            {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "Basic",
                "attributes": [
                "CPU.Used.Basic" ,
               "scaleInstanceRange.Basic" ,
                "triggerTime.Basic " 
                ],
            } ,
            {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "WebIntensive",
                "attributes": [
                "WAS_WebApplications.MaxServiceTime.WebIntensive",
                "scaleInstanceRange.WebIntensive ",
                "triggerTime.WebIntensive " 
                ],  
            } ,
            {
                "category" : "SCALE_POLICY_TYPE",
                "id" : "WebToDB",
                "attributes": [
                    "WAS_WebApplications.MaxServiceTime.WebToDB " ,
                    "WAS_JDBCConnectionPools.WaitTime.WebToDB ",
                    "WAS_JDBCConnectionPools.PercentUsed.WebToDB " ,
                    "scaleInstanceRange.WebToDB ",
                    "triggerTime.WebToDB " 
                ],
            }
          ..

The scaling policy that is attached to the WAS component with the WebToDB policy type selected would look like the following example in the application model.

{
    "attributes": {
        "WAS_WebApplications.MaxServiceTime.WebToDB": [
            1000,
            5000
        ],
        "WAS_JDBCConnectionPools.WaitTime.WebToDB": [
            1000,
            5000
        ],
        "WAS_JDBCConnectionPools.PercentUsed.WebToDB": [
            20,
            80
        ],
        "scaleInstanceRange.WebToDB": [
            1,
            10
        ],
        "triggerTime.WebToDB": 120
    },
    "id": "ScalingPolicyofWAS",
    "type": "ScalingPolicyofWAS",
    "groups": {
        "None": false,
        "Basic": false,
        "WebIntensive": false,
        "WebToDB": true
    }
}

The WAS transformer generates the following JSON snippet in the topology document from the WebToDB type of scaling policy.

{
    "role" : "WAS",
    "triggerEvents": [
        {
            "metric": "WAS_WebApplications.MaxServiceTime",
            "scaleOutThreshold": {
                "value": 5000,
                "type": "CONSTANT",
                "relation": ">="
            },
            "conjection": "OR",
            "scaleInThreshold": {
                "value": 1000,
                "type": "CONSTANT",
                "relation": "<="
            }
        },
        {
            "metric": "WAS_JDBCConnectionPools.WaitTime",
            "scaleOutThreshold": {
                "value":5000,
                "type": "CONSTANT",
                "relation": ">="
            },
            "conjection": "OR",
            "scaleInThreshold": {
                "value": 1000,
                "type": "CONSTANT",
                "relation": "<="
            }
        },
        {
            "metric": "WAS_JDBCConnectionPools.PercentUsed",
            "scaleOutThreshold": {
                "value": 80,
                "type": "CONSTANT",
                "relation": ">="
            },
            "conjection": "OR",
            "scaleInThreshold": {
                "value": 20,
                "type": "CONSTANT",
                "relation": "<="
            }
        }
    ],
    "min": 1,
    "max": 10,
    "triggerTime": 120,
}


Caching Service Scaling Policy

This example shows how to develop a scaling policy for shared services. Shared services have a predefined application model in the plug-in, so developers must explicitly provide metadata.json and appmodel.json, which both include the attributes of the scaling policy.

The metadata for the caching service is shown in the following code sample. The scaling policy portion of the code starts with the line "id": "ScalingPolicyofCaching".

[
    {
        "id": "CACHE",
        "type": "component",
        "attributes": [
            {
                "id": "cachingVMs",
                "required": true,
                "type": "number",
                "min": 1,
                "max": 5
            }
        ]
    },
    {
        "id": "ScalingPolicyofCaching",
        "type": "policy",
        "applicableTo": [
            "CACHE"
        ],
        "attributes": [
            {
                "id": "CapacityUsed",
                "label": "Capacity used",
                "type": "range",
                "displayType": "percentage",
                "required": false,
                "max": 100,
                "min": 1
            },
            {
                "id": "scaleInstanceRange",
                "type": "range",
                "min": 1,
                "max": 50,
                "required": true
            },
            {
                "id": "triggerTime",
                "type": "number",
                "max": 1800,
                "min": 30,
                "required": true
            }
        ]
    }
]

The application model of the caching service is shown in the following code sample. The scaling policy portion of the code starts with the line "attributes": {.

{
    "model": {
        "nodes": [
            {
                "attributes": {
                    "cachingVMs": 1
                },
                "type": "CACHE",
                "id": "sharedservice"
            },
            {
                "attributes": {     
                    "scaleInstanceRange1": [
                        1,
                        10
                    ],
                    "triggerTime1": 120,
                    "CapacityUsed": [
                        20,
                        80
                    ]
                },
                "type": "ScalingPolicyofWAS",
                "id": "Scaling Policy"
            }
        ],
        "version": "1.0",
        "app_type": "service",
        "links": [],
        "patterntype": "foundation",
        "name": "CachingService",
        "description": "Caching Service"
    }
}


Pattern type packaging reference

Several virtual application pattern types are shipped with IBM PureApplication System W1500. The pattern types are a collection of plug-ins. The plug-ins contain the components, policies, and links of the virtual application pattern. This topic explains the packaging of the plug-ins.


Pattern type files

Plug-in files that are associated with a pattern type are as follows:

patterntypes/{ptype}.tgz

The built pattern type file, where {ptype} is the pattern type name.

plugins/{plugin}.tgz

One or more built plug-in files, where {plugin} is the plug-in name.

files/{name}

One or more files used by your plug-in, such as software to install on deployed virtual machines.

The {ptype}.tgz file is required and must contain the patterntype.json file. The {ptype}.tgz file might also contain the license and localized messages. For example, the patterntype.json file for the IBM Web Application Pattern is as follows:

{
  "name":"NAME",
  "shortname":"webapp",
  "version":"2.0.0.0",
  "description":"DESCRIPTION",
  "prereqs":{
   "foundation":"*"
  },
  "license":{
   "pid":"5725D57",
   "type":"PVU"
  }
}

A pattern type defines a logical collection of plug-ins, but not the members. The members (plug-ins) define their associations with pattern types in the config.json file. Therefore, pattern types are dynamic collections and can be extended by third parties. For example, the config.json file for the DB2 plug-in is as follows:

{
    "name":"db2",   
    "version":"2.0.0.0",   
    "files":[      
      "db2/db2_wse_en-9.7.0.3a-linuxx64-20110330.tgz",   
        "optim/dsadm223_iwd_20110420_1600_win.zip",   
        "optim/dsdev221_iwd_20110421_1200_win.zip",   
        "optim/com.ibm.optim.database.administrator.pek_2.2.jar",      
      "optim/com.ibm.optim.development.studio.pek_2.2.jar"   
    ],   
    "patterntypes":{
      "primary":{
        "dbaas":"1.0"
      },      
      "secondary":[
        {
          "webapp":"2.0"
        }
      ]
    },
    "packages":{
      "DB2":[
        {
            "persistent":true,            
            "requires":{               
              "arch":"x86_64"            
            },            
            "parts":[
              {                  
                "part":"parts/db2-9.7.0.3.tgz",                  
                "parms":{                     
                  "installDir":"/opt/ibm/db2/V9.7"
                }               
              },
              {
                  "part":"parts/db2.scripts.tgz"
               }
            ]
         }
      ]
   }
}


Packaging a pattern type

To understand the options for packaging a pattern type, you need to understand how files are stored. Files, such as software images, are stored in the Storehouse. After they are in the Storehouse, they can be referenced by scripts for installation. This central storage location enables reuse of files. For example, if you create a My Pattern 1.0 pattern type and include the file samples/sample_file.tgz in one of the plug-ins, the file is saved in the Storehouse. If you update functions in the plug-in but do not need to update samples/sample_file.tgz, you can package an updated My Pattern 1.1 pattern type with the new plug-in and exclude samples/sample_file.tgz from the file directory structure during packaging.

When you package files in a plug-in, you must reference the files in config.json. A top-level files element is a JSON array of relative file names. At build time, these files are retrieved from a file server.

The storage.url property is used to provide the file server URL. For example,

-Dstorage.url=http://fileserver.example.com/managed/plugins/

If the file is stored locally, you can use storage.dir instead. For example:

-Dstorage.dir=/packages/plugins

When the plug-in is installed, files are saved in the Storehouse. For example:

https://172.16.65.88/storehouse/admin/files/db2/db2_wse_en-9.7.0.3a-linuxx64-20110330.tgz

You can then view the files with the Storehouse Browser from the user interface.

A plug-in part/install.py script can download files from the Storehouse by using the following lines:

import urlparse

installerUrl = urlparse.urljoin(maestro.filesurl, "db2/db2_wse_en-9.7.0.3a-linuxx64-20110330.tgz")
maestro.downloadx(installerUrl, installDir)

maestro.filesurl

Provides the root Storehouse URL. For the previous Storehouse URL example, the root URL is:

https://172.16.65.88/storehouse/admin/files/

maestro.downloadx

Downloads a .zip, .tgz, or .tar.gz file that is given by installerUrl and extracts it into the installDir directory.

You can package files as a part of the pattern type packaging build or you can package files within individual plug-in builds. The builds include a -Dstorage.url parameter to retrieve files from a file server. The -Dstorage.url parameter uses the Ant get task to download the files.

If a -Dstorage option is not specified, the files are put in your plug-in archive or pattern type archive in the correct relative path under the files directory, but with a .placeholder suffix added to the file name. Each placeholder file contains only the relative path name.
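
For example, for the DB2 installation image referenced earlier, the archive would contain a placeholder entry like the following hypothetical path, whose file content is just that relative path:

files/db2/db2_wse_en-9.7.0.3a-linuxx64-20110330.tgz.placeholder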

Note: You can run Ant from Eclipse or the command line.

To package files into your built pattern type:

  1. Go to the plugin.depends directory in your plug-in PDK workspace and run Ant.

    ant_path/ant -f build.xml
    
    This command builds all the plug-ins in the workspace, and puts the resulting .tgz files in the plugin.depends/image/plugins directory.
  2. Go to the root of the pattern type project, and type the following command:

    ant_path/ant -f build.patterntype.xml -Dstorage.url=file_server_url
    

    Or for a local directory

    ant_path/ant -f build.patterntype.xml -Dstorage.dir=local_path
    

    For example,

    ant_path/ant -f build.patterntype.xml -Dstorage.url=http://fileserver.example.com/managed/plugins/
    

    The patterntype.tgz file is saved in the export subdirectory of the project and includes retrieved files in the files subdirectory of the archive.

To package files in an individual plug-in, build the plug-in from the plug-in directory with the Virtual Application Extension Archive format.

ant_path/ant -f build.plugin.xml -Dstorage.url=file_server_url publishVAEA

Or for a local directory

ant_path/ant -f build.plugin.xml -Dstorage.dir=local_path

For example,

ant_path/ant -f build.plugin.xml -Dstorage.url=http://fileserver.example.com/managed/plugins/ publishVAEA

The plugin-vaea.tgz file is saved in the export directory, and includes retrieved files in the files subdirectory of the archive.


Plug-in validation

When you build or import a plug-in, a validation check runs to verify the contents of the plug-in.

During the validation process, the validator identifies issues and generates errors or warnings based on the type of issue. If an error is found, the build or import operation fails. The following sections describe which files are validated and what validation rules are applied.


Plug-in validation


Validation criteria for plug-ins

Object Validated item Validation criteria Result if validation fails
JSON files *.json The file must contain well-formed JSON syntax Error
appmodel/locales/messages.json
If the file exists, the file must contain well-formed JSON syntax Warning
config.json name*

  • Must be a non-empty string
  • Cannot contain a forward slash (/)

Error

patterntypes*

The file must contain a patterntypes element. At a minimum, a primary pattern type or a secondary pattern type must be defined.

Examples of valid elements.

"patterntypes" : {"primary" : {"webapp":"2.0"} }
 
"patterntypes" : {"secondary" : [ {"*" : "*"} ]   }
 
"patterntypes" : {"primary" : {"dbaas":"1.1"},
                  "secondary ": {"webapp":"*"} }
Error


For primary pattern types:

  • The pattern type must be in a JSON object
  • The version must be a string and match the regular expression

    [\d+(\.\d+)]
    
    The format of the version number is N.N. For example, 1.2.

A plug-in must have only one primary pattern type.

For secondary pattern types:

  • The pattern types must be in a JSON object or a JSON array.
  • The version must be a string and match the regular expression

    [(\d+(\.\d+))|\*]
    
    The format of the version number is N.N. For example 1.2. To indicate any version, * is supported.

Error

version* The plug-in version must be a non-empty string and match the regular expression [\d+(\.\d+){3}]. The format of the version number must be N.N.N.N. For example, a valid version number is 1.0.0.3. Error

"parts","node-parts"

  • At build-time, the corresponding directory of each "part" and "node-part" declared in config.json must exist in the /plugin directory.
  • During import, the corresponding .tgz package file of each "part" and "node-part" declared in config.json must exist in the /plugin directory.


Build: Error Import: Warning

config_meta.json
If the file exists, it is a JSON array of JSON objects, each of which is a valid attribute. The id should start with "parms". Warning
metadata.json attributes

  • If the file exists, it must be a valid JSON array of JSON objects. Each JSON object can have an "attributes" attribute, which must be a JSON array of JSON objects, or null for no attributes. Each of these JSON objects must represent a valid attribute.
  • For all attributes, sampleValue must be consistent with the value of the type (string, number, boolean)
  • For the attributes with the type range:

    • min must be less than sampleValue
    • sampleValue must be less than max


Build: Error Import: Warning

Python files *.py All .py files in the parts and nodeparts directories must not contain syntax errors. Build: Error
/parts directory
If a plug-in has a parts directory, it cannot be empty. Warning

Each part Each ${PART}.tgz or directory under /parts must include an install.py part installation script. Warning
Transformer .jar file META-INF/ MANIFEST.MF Each bundle must contain a MANIFEST.MF Error


MANIFEST.MF must declare Service-Component Warning


The .xml referenced in Service-Component in MANIFEST.MF must exist.


Build: Warning Import: Error



MANIFEST.MF must reference all .xml files under the directory OSGI-INF/ Warning

OSGI-INF/*.xml The bundle must provide a valid interface:

"com.ibm.maestro.model.transform.TopologyProvider",
 
"com.ibm.maestro.plugin.services.PluginProvider",
 
"com.ibm.maestro.model.transform.TopologyProcessor",
 
"com.ibm.maestro.iaas.ServiceProvisioner",
 
"com.ibm.maestro.iaas.RegistryProvider",
 
"com.ibm.maestro.iaas.PostProvisioner
Warning


Each service component that is mapped to a component in metadata.json must provide the interface:

"com.ibm.maestro.model.transform.TopologyProvider"
Error
operation.json
If the file exists, it must be a valid JSON object. Each element of the JSON object must have a String key (which is a role name) and a value that is a JSON array of JSON objects, each of which represents an operation that is exposed by the role (see the example after this table). Build: Error

"id" Each operation JSON object must have an "id" attribute, with a value that is unique and a non-null String. Build: Error

"script" Each operation JSON object must have a "script" attribute, with a value that is a non-null String. The value is the name of a script to execute, like operation1.py. The script name can be followed by a space and a parameter or method name. The script (.py file) referenced in operation.json must exist. Build: Error

"attributes" Each operation JSON object can have an "attributes" attribute, which must be a JSON array of JSON objects, or null for no attributes. Each of these attribute JSON objects must represent valid attributes. Build: Error

"preset_status" Each operation JSON object can have a "preset_status" attribute, with a value that is a non-null String. The value must be one of these valid Role states:

INITIAL
INITIALIZING
INSTALLING
CONFIGURING
ERROR
FAILED
RUNNING
SAFERUNNING
STARTING
STOPPING
STOPPED
TERMINATING
TERMINATED
SUSPENDING
SUSPEND_SUCCESSED
SUSPEND_FAILED
CHECKPOINTING
CHECKPOINT_SUCCESSED
CHECKPOINT_FAILED
RESUMING
UNKNOWN
Build: Error
tweak.json
If the file exists, it must be a valid JSON array of tweak JSON objects. Each tweak JSON object must be a valid attribute and include an "id" attribute with a non-null String value. The tweak.json file can refer to attributes defined in metadata.json. The reference is made by using the "ref-id" attribute and one of the "ref-component", "ref-link", or "ref-policy" attributes. These attributes must all be non-null Strings. A valid ref tweak JSON object has three keys: "id", "ref-id", and one of "ref-component", "ref-link", or "ref-policy" (see the example after this table). Build: Error

"ref-id", "ref-xxxxx" The referenced attribute (ref-id) in the referenced component/link/policy must exist in metadata.json. Build: Error
Velocity templates /template/*.vm All .vm template files must use correct syntax. Error


Each .xml file of a template transformer must reference a valid .vm template file. Error
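
Taken together, the operation.json rules imply a file of roughly the following shape. This sketch is hypothetical; the role name, id, script, and attribute are illustrative and not taken from a shipped plug-in:

{
    "WAS": [
        {
            "id": "collectDiagnostics",
            "script": "collect_diagnostics.py execute",
            "attributes": [
                {
                    "id": "outputDir",
                    "type": "string"
                }
            ],
            "preset_status": "RUNNING"
        }
    ]
}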

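Similarly, a minimal hypothetical ref tweak entry in tweak.json would look like the following; the attribute and component names are illustrative:

[
    {
        "id": "maxHeapSize",
        "ref-id": "maxHeapSize",
        "ref-component": "WAS"
    }
]
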

Attribute validation


Validation criteria for attributes

Attribute Validated item Validation criteria Result if validation fails
All attributes id The value must be a non-null, non-empty String. Error

type The value must be a String. Only the following values are valid: string, boolean, number, range, file, array. Error
string regExp If there is a regExp attribute, it must be a String that contains a valid Java regular expression. Error

options

  • If there is an options attribute, it must be a non-empty JSON array that contains JSON objects.
  • Each JSON object must have "id" and "name" attributes that are Strings.
  • The name value cannot be null. It can be a String that represents a Kernel Services API call.

Error

sampleValue

  • If there is a sampleValue attribute, it must be a non-null String.
  • If "regExp" is specified, the value of sampleValue must match the regular expression defined by "regExp".
  • If "options" is specified, the sampleValue must be one of the options.

Error
boolean sampleValue

  • If there is a sampleValue attribute, it must be a valid boolean value: the JSON value true or false.
  • A null value is interpreted as true.
  • A String of "true", ignoring case, is interpreted as true. Any other string value is interpreted as false.
  • Any other value type is invalid, including a number.

Error
file extensions If there is an "extensions" attribute, it must be a non-empty JSON array that contains Strings. Error
array entryType

An array must include one of the following attributes:

  • An "entryType" attribute. Only the following values are valid: string, boolean, number, range, file, array.
  • Both a "referenceKey" attribute and a "referenceType" attribute.

These attributes must be non-null Strings.

Error
number min,max The min and max attributes are optional. If specified, the following requirements apply:

  • The value must be a Long number.
  • If both min and max attributes are specified and have valid values, the min value must be less than or equal to the max value.

Error

sampleValue If there is a "sampleValue" attribute, it must be a valid Long number. Null is a valid value. The sample value must be:

  • Greater than or equal to a specified min value.
  • Less than or equal to a specified max value.

Error
range min, max The min and max attributes are required.

  • For each attribute, the value must be a Long number.
  • The min value must be less than or equal to the max value.

Error

sampleValue If there is a "sampleValue" attribute, it must be a JSON array of two Long numbers.

  • The sample minimum value must be greater than or equal to a specified min value.
  • The sample maximum value must be less than or equal to a specified max value.
  • The first sample value must be less than or equal to the second sample value.

Error
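
As an illustration of the range rules, the following hypothetical attribute definition is valid; the id is borrowed from the caching service example earlier:

{
    "id": "scaleInstanceRange",
    "type": "range",
    "min": 1,
    "max": 50,
    "sampleValue": [1, 10]
}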


Pattern type validation


Validation criteria for pattern types

Object Validated item Validation criteria Result if validation fails
All .json files *.json The file must contain well-formed JSON syntax. This validation occurs only at build time. Error
patterntype.json name This validation occurs only at build time.

  • Must be a non-empty string
  • Cannot contain a forward slash (/)

Error

version This validation occurs only at build time. The pattern type version must be a non-empty string and match the regular expression [\d+(\.\d+){3}]. The format of the version number must be N.N.N.N. For example, a valid version number is 1.0.0.3. Error

firmware Optional. This validation occurs only during import of the pattern. The value must be a string that is listed in VRMF format. For example, a valid firmware number is 3.1.0.7. The PLATFORM.FIRMWARE_LEVEL reported by the platform must be equal to or greater than the specified firmware value. Error


Plug-ins for development

The Plug-in Development Kit includes several plug-ins that you can install to assist you with developing, testing, and troubleshooting your plug-ins.


Debug plug-in

The debug plug-in, pdk-debug, provides support for debugging Python installation and lifecycle scripts and topology documents in a plug-in. It does not replace or support debugging of Java code in plug-ins.

The pdk-debug plug-in is included in the IBM Plug-in Development Kit.

Before you can use the debug plug-in, you must import it into the IBM PureApplication System W1500 development environment.

You can add a debug component to an application model that you create in the Virtual Application Builder to provide more functions for debugging plug-ins. You can select from the following debug modes:

Mock deployment

Deployment of the virtual application generates artifacts in the Storehouse, but no resources are provisioned. You can specify a list of ServiceProvisioner types to run as a comma-separated list.

Use this option as the first debugging step after you write the plug-in and pattern type. You can use the Storehouse Browser to view your final topology document. In the workload console, click:

    System > Storehouse Browser > user > deployments > deployment ID > topology.json > Get Contents

The deployment ID is in the Kernel Services (KS) console log, as follows:

[18/Aug/2011 21:19:54:447 +0000] INFO debug topology-only deployment stored topology document in 
https://172.16.33.10:9444/storehouse/user/deployments/d-02de78c2-6b26-4f32-80af-ba9083a481c4/topology.json

Deployment for manual debugging

The virtual application is deployed normally, but input and output data from all scripts is saved. You can SSH into the deployed virtual machines to see the data. The input and output data for a particular script is saved in the directory where the script ran. You can then perform manual debugging. You can change data and scripts, and then manually run scripts to see the effects of your changes. For example, if a script has a syntax or logic error, you can correct it, and test your fix.

You can also choose how you want to handle sensitive data during deployment. If you select the Record sensitive data check box, sensitive data is logged. This option is disabled by default.

To add the debug component to an existing virtual application:

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Patterns > Virtual Applications.

  3. Select a virtual application pattern and click Open on the toolbar.

  4. Click Debug, listed under Other Components, and drag the icon to the Virtual Application Builder canvas. The properties panel for the debug component is displayed to the right of the Virtual Application Builder pane. For details on the properties panel settings, view the help by selecting the help icon on the properties panel.

  5. Select the debug mode to use.

  6. Edit the virtual application pattern, as required.

  7. When you are ready to do your testing, deploy the virtual application pattern.

    • If you configured the debug component with the Mock deployment option, see Deploy mock deployments for more information.
    • If you configured the debug component with the Deployment for manual debugging option, deploy the virtual application pattern like a regular deployment. Ensure that you set up SSH keys for the deployment so that you can connect to the virtual machines for manual debugging.

You added the debug component to an application.

You can edit settings for the pdk-debug plug-in later as needed. When you are done with your debugging, you can remove the debug component from the application model.

You can also view properties for a component outside of the Virtual Application Builder by looking at the information for the associated plug-in. Click Cloud > System plug-ins. Search for the plug-in to view and then click the plug-in name in the list.


Deploy mock deployments

The purpose of a mock deployment is to test the process to transform the application model that is created in Virtual Application Builder into the topology document used to deploy virtual machines.

About this task

A mock deployment generates artifacts in the Storehouse, but no resources are provisioned. After you initiate a mock deployment, you can view the artifacts in the Storehouse to ensure that all the transformations that you expected are properly constructed in the topology.json file.

Procedure

After you create the virtual application pattern with the debug component configured for mock deployment, perform the following steps:

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Patterns > Virtual Applications.

  3. Select the virtual application pattern to deploy.

  4. Click Deploy on the toolbar.

  5. Specify the target for the deployment, and then click OK. The deployment process starts.

  6. View the virtual application instance to obtain the deployment ID.

    1. Click Instances > Virtual Applications.
    2. Select the virtual application instance. The details for the instance are displayed in the detail pane, including the deployment ID. The status for the mock deployment remains Launching because resources are not provisioned for a mock deployment.

  7. To view the topology document, open the Storehouse Browser.

    1. Click System > Storehouse Browser.

    2. Expand user > deployments > deployment_id, where deployment_id is the deployment ID you obtained from the Virtual Application Instances page.

    3. Select topology.json. The topology document is displayed in your browser.

      You can use a browser add-on such as JSONView to browse the JSON document. JSONView is available for Firefox and Chrome browsers.


Unlock plug-in

To facilitate plug-in development, you can use the plugin-unlock plug-in to delete a plug-in that is used by a virtual application instance. You can then easily replace the plug-in with an updated version and activate the new plug-in on existing virtual machines instead of redeploying a new copy of the application.

The plugin-unlock plug-in is for development environments only to facilitate testing of plug-ins that are being developed. It is not intended for production environments and should not be installed in a production environment.

The unlock plug-in is included in the IBM Plug-in Development Kit.

In a normal IBM PureApplication System W1500 environment, a plug-in cannot be deleted if it is being used by a deployed virtual application. For example, if a virtual application uses a plug-in called custom.plugin at the version level 1.2.3.4, you cannot delete version 1.2.3.4 of custom.plugin from the system. Locking the plug-in in this way is important for the integrity and stability of virtual applications in a production environment. If the plug-in is deleted and the deployed application needs to scale up or recover from a failure, the absence of the plug-in can result in application failure.

In a development environment, however, the ability to delete and replace a plug-in is useful because it significantly reduces the time that is required to test updates to a plug-in. For example, if a deployed application is using custom.plugin version 1.2.3.4 and you want to test a bug fix or new feature that you added to the plug-in, you delete the custom.plugin version 1.2.3.4 plug-in first, import your new modified version of it, and then activate it on deployed virtual machines in the virtual application instead of deploying a new copy of the virtual application.

To use the plugin-unlock plug-in for testing:

  1. Deploy the virtual application with the existing plug-in with a known SSH key. An SSH connection is required to activate the new version of the plug-in that you test.

  2. Update the code for the plug-in you are developing, and build the plug-in without changing the version number of the plug-in.

  3. Import the plugin-unlock plug-in.
  4. Delete the existing plug-in version from PureApplication System.

  5. Import the plug-in that you updated and built.
  6. Apply the changes to the virtual machine.

    1. Connect to the virtual machine by using SSH to access the command line.

    2. Stop the agent and deployment inlet.

      killall java
      

    3. Remove installed artifacts with the following commands:

      rm -rf /opt/IBM/maestro
      cd /0config
      rm -rf 0config.log backup/ cert.tmp/ debug/ doneconfig download.zip exec_vm_tmpl/ itlm/ lafiles/ logging/ monitor/ nodepkgs/ properties/ start/
      

    4. Perform any operations required to put the virtual machine in a state that is ready for a fresh activation of the plug-in. These operations can include stopping processes or removing files or other content that the plug-in installs on the virtual machine.

    5. Restart the virtual machine and activate the plug-in with the following command:

      /0config/0config.sh
      
      The node reboots and restarts itself, as if it were a newly deployed node. It downloads the topology document, and then takes the node parts, parts, and roles through all the lifecycle events by running all the lifecycle startup scripts. Your newly installed version 1.2.3.4 of custom.plugin is used by the application. You can restart as many virtual machines as necessary to properly test your plug-in code.

  7. Update your plug-in again as necessary by repeating steps 4-6 to apply the changes you made.

When you are finished with your testing, delete the plugin-unlock plug-in.


Develop plug-ins in Eclipse

The Plug-in Development Kit (PDK) includes an Eclipse plug-in that you can use in your Eclipse or Rational Application Developer environment to create and edit some of the configuration files in the plug-ins used in the virtual applications.


Overview

You can use virtual application patterns to add third-party software to an existing platform in an efficient, reusable way by developing a corresponding plug-in. Developing a plug-in for your platform requires an understanding of the conventions, configuration, and details of your platform. You typically must edit the configuration files in your preferred text editor.

The Plug-in Development Kit (PDK) includes an Eclipse plug-in that you can use to help develop the plug-ins that are used in virtual application patterns.

You can also access the Plug-in Development Kit documentation within the Eclipse IDE by clicking Help > Help Contents.

With this Eclipse plug-in, your Eclipse IDE becomes a virtual application plug-in development environment. Developing workload plug-ins is easier with these plug-in development features, the power of the Eclipse IDE, and other third-party plug-ins, such as PyDev.


Prerequisites

To use this Eclipse plug-in, verify that your environment includes the following prerequisites:

The Java Platform, Enterprise Edition (Java EE) version of Eclipse is recommended.


Install the Eclipse plug-in

You can install the Eclipse plug-in into your Eclipse environment by completing the following steps:

  1. Start Eclipse to load the Eclipse SDK workspace.
  2. Click Help > Install New Software. The Available Software window is displayed.
  3. Click Add to add a software repository. The Add Repository window is displayed.
  4. Click Archive to define a local Eclipse repository archive.
  5. Go to the directory where you installed and unpacked the Plug-in Development Kit.
  6. Select the compressed file com.ibm.maestro.plugin.pdk.site.zip and click Open.
  7. In the Name field of the Add Repository window, specify a name for the new repository archive, such as IBM pureScale Eclipse plug-in.
  8. Click OK.
  9. After a moment, the Name area is updated to display the name IBM Workload Plugin Development Kit.
  10. Click Select All.
  11. Clear the following check box: Contact all update sites during install to find required software.
  12. Click Next.
  13. Wait for the installation to complete.


Create a project

You can create a virtual application plug-in project in Eclipse by using the following procedure:

  1. In your Eclipse environment, switch to the Workload Plug-in Development Perspective, if needed.
  2. Click File > New > Project to start the New Project wizard.
  3. Select IBM Workload Plug-in Development > IBM Workload Plug-in Project.
  4. Click Next. The Create an IBM Workload Plug-in Project window is displayed.
  5. In the Project name field, specify a name for your new project. Notice that the Plug-in name field is automatically populated with the same project name. However, you can change it to another name if preferred. The project name is the Eclipse project name, and the plug-in name is the name of this plug-in that you are developing.
  6. The Generate project skeleton check box is selected by default.

    • If this check box is selected, the wizard creates a number of files and folders automatically, which are needed by most plug-in projects.
    • If you clear this check box, the wizard creates an empty project that includes only the config.json file. You must add other files manually as needed.

  7. Click Finish. The new project is created and added to the list of projects in the Project Explorer tab.


Import an existing project

You can import an existing plug-in project into your Eclipse plug-in development environment. The existing project package must be in the standard .tar.gz or .tgz format. Import an existing plug-in project by using the following procedure:

  1. In your Eclipse environment, switch to the Workload Plug-in Development Perspective, if needed.
  2. Click File > Import to start the Import wizard.
  3. Select IBM Workload Plug-in Development > IBM Workload Plug-in Package.
  4. Click Next. The Import an IBM Workload Plug-in Package window is displayed.
  5. Click Browse to browse to the location of your plug-in package file.
  6. Select the plug-in package file to import. The selected file path and name are displayed.
  7. Click Next. A second wizard page is displayed.
  8. The name of the project is initially set to the name of your plug-in in the Project name field. Typically you do not need to change this name, but you can if preferred.
  9. The default project location is in the default workspace location of your Eclipse environment, but you can change that if preferred.
  10. Click Finish. The import wizard extracts the package and adds the project to the list of projects in the Project Explorer tab.


Pattern type project overview

Pattern types are a collection of plug-ins. The plug-ins contain the components, policies, and links of the virtual application pattern. To open the pattern type project overview page, double-click the patterntype.json file, or right-click the file and select Open.

The pattern type project overview page shows the following information about the pattern type:


Plug-in project overview

The plug-in project includes the following key artifacts:

src folder

You can create your Java code in this directory.

build folder

This folder contains the build.plugin.xml file and Ant JAR files for building the plug-in.

i18n folder

This folder contains the globalization files.

lib folder

This folder contains all of the JAR files that are needed to compile the plug-in project.

META-INF folder

This folder contains metadata files.

OSGI-INF folder

This folder contains OSGi configuration files.

plugin folder

This folder is the plug-in configuration directory, which includes the config.json plug-in configuration file, and the following folders:

  • appmodel
  • nodeagents
  • parts
  • templates

build.xml

This Ant build file is needed to build the plug-in project.

You can add content to these artifacts to develop your plug-in, in the following areas:


Manage the config.json plug-in configuration file

The config.json file contains the plug-in configuration.

You can modify the config.json file with the Config JSON Editor:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Expand the plugin folder node.
  3. Double-click the config.json file, or right-click the file and select Open with > Config JSON Editor.

The Configuration view is displayed by default, showing the contents of the config.json in a convenient form-based user interface. Using this interface, you can do the following tasks:

Values that you specify are immediately validated, and errors (such as missing required fields, duplicate IDs) are flagged (with help tips) for correction.

In some cases, you might prefer to modify the configuration text directly, by clicking the Source tab to present the configuration information in a formatted text editor. Changes that are made in either the Configuration user interface or the source text editor are immediately reflected in both views.


Manage the metadata.json component property UI configuration file

The metadata.json file defines the components, policies, and links between specific source and target component types for the virtual application builder. Other files are referenced from the metadata.json file, such as image files for icons to display in Virtual Application Builder. The metadata describes how Virtual Application Builder visually displays components, links, and policies and determines which configuration parameters are exposed to users.

You can modify the metadata.json file with the metadata JSON Editor:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Expand the plugin folder node.
  3. Expand the appmodel folder node.
  4. Double-click the metadata.json file, or right-click the file and select Open with > Metadata JSON Editor.

The Overview page is displayed by default, showing a list of metadata elements that you can use to create a virtual application pattern. Using this form-based interface, you can do the following tasks:

Values that you specify are immediately validated, and errors (such as missing required fields, duplicate IDs) are flagged (with help tips) for correction.

In some cases you might prefer to modify the configuration text directly, by clicking the Source tab to present the configuration information in a formatted text editor. Changes that are made in either the Application Model user interface or the source text editor are immediately reflected in both views.
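For orientation, the following minimal sketch shows the general shape of a component entry in a metadata.json file. The IDs, labels, attribute definitions, and image path are illustrative assumptions only; they are not taken from any shipped plug-in.

[
   {
      "id"          : "MyComponent",
      "type"        : "component",
      "label"       : "My Component",
      "description" : "An illustrative component definition",
      "image"       : "appmodel/images/MyComponent.png",
      "attributes"  : [
         {
            "id"       : "archive",
            "label"    : "Archive file",
            "type"     : "file",
            "required" : true
         }
      ]
   }
]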


Manage the operation.json configuration file

The operation.json file defines the operations that can be called when the virtual application is running. Operations are grouped by role, and each operation is associated with a Python script, which is provided by the plug-in. Users request the operation on the Virtual Application Builder console. This request is forwarded by PureApplication System, and any parameters are sent to the script on systems where the role is running.

You can modify the operation.json file with the Operation JSON Editor:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Expand the plugin folder node.
  3. Expand the appmodel folder node.
  4. Double-click the operation.json file, or right-click the file and select Open with > Operation JSON Editor.

The Operation view is displayed by default, showing a list of operations. The name of the operation is the concatenation of the ID and role values that are associated with the operation. Using this form-based interface, you can do the following tasks:

Values that you specify are immediately validated, and errors (such as missing required fields, duplicate IDs) are flagged (with help tips) for correction.

In some cases you might prefer to modify the configuration text directly, by clicking the Source tab to present the configuration information in a formatted text editor. Changes that are made in either the Operation user interface or the source text editor are immediately reflected in both views.
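For orientation, an operation entry might look like the following minimal sketch. Only the id, role, and script concepts come from this documentation; the exact field names and values are illustrative assumptions.

[
   {
      "id"     : "setTraceLevel",
      "role"   : "MyRole",
      "script" : "operations.py set_trace_level",
      "label"  : "Set trace level",
      "attributes" : [
         { "id" : "level", "label" : "Trace level", "type" : "string" }
      ]
   }
]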


Manage the tweak.json configuration file

The tweak.json file defines the configuration parameters that can be changed on the running deployment. The parameters in this file depend on an associated configuration operation defined in the operation.json file.

You can modify the tweak.json file with the Tweak JSON Editor:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Expand the plugin folder node.
  3. Expand the appmodel folder node.
  4. Double-click the tweak.json file, or right-click the file and select Open with > Tweak JSON Editor.

The Tweak view is displayed by default, showing a list of configuration parameters. The name of the parameter is the concatenation of the role and ID values that are associated with the parameter. Using this form-based interface, you can do the following tasks:

Values that you specify are immediately validated, and errors (such as missing required fields, duplicate IDs) are flagged (with help tips) for correction.

In some cases you might prefer to modify the configuration text directly, by clicking the Source tab to present the configuration information in a formatted text editor. Changes that are made in either the Tweak user interface or the source text editor are immediately reflected in both views.
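For orientation, a configuration parameter entry might look like the following minimal sketch. Only the role and id concepts come from this documentation; the exact field names are illustrative assumptions.

[
   {
      "id"    : "trace_level",
      "role"  : "MyRole",
      "label" : "Trace level",
      "type"  : "string"
   }
]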


Manage OSGi service components

PureApplication System provides limited support for Open Services Gateway initiative (OSGi) services within plug-ins. Plug-ins can provide implementations of specific service interfaces:

You can create a new plug-in OSGi service component by completing the following steps:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Right-click the OSGI-INF folder.
  3. Select New > OSGI Service Component.

The OSGI Service Component window is displayed. Specify the information for your service component in the fields provided. The fields change depending on which service type you select. When you click Finish, the new service component is added to the OSGI-INF folder in the plug-in. Associated Java source files are generated automatically, and the MANIFEST.MF file is updated automatically.

Values that you specify are immediately validated, and errors (such as existing names or files) are flagged (with help tips) for correction.


Manage plug-in node parts

Node parts are installed by the activation script and generally contain binary files and scripts that augment the operating system. You can create new node parts for your plug-in, generating the directory structure and, optionally, a script stub, such as setup.py.

You can create a node part by completing the following steps:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Right-click the nodeparts folder.
  3. Select New > Plug-in Node Part.

The Plug-in Node Part window is displayed. Specify a name for your node part, and indicate whether to generate only the necessary directory structure, or create the directory structure and available script stubs. When you click Finish, the new node part is added to the plugin/nodeparts folder in the plug-in.

Values that you specify are immediately validated, and errors (such as existing names or files) are flagged (with help tips) for correction.

After you create your plug-in node parts, you can edit the associated Python scripts with your preferred editor.
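For example, a minimal setup.py stub might look like the following sketch. As in the lifecycle script examples elsewhere in this documentation, the sketch assumes that the maestro module and logger object are provided by the runtime; the log messages are illustrative.

# setup.py - a minimal, hypothetical node part stub
# maestro and logger are assumed to be provided by the runtime; no import is needed
logger.info('Setting up node part')
# Parms from the topology document are available as maestro.parms
logger.debug('node part parms: %s', maestro.parms)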


Manage plug-in parts

Plug-in parts are installed by the workload agent and generally contain binary files and lifecycle scripts that are associated with roles and dependencies. You can create new parts for your plug-in, generating the directory structure and, optionally, one or more script stubs, such as install.py and uninstall.py.

If you created a new plug-in project and selected the Generate project skeleton check box, the resulting project is created with a plugin/parts folder that contains a set of scripts by default. You can add new plug-in parts to this folder.

Create a new plug-in part by completing the following steps:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Right-click the parts folder.
  3. Select New > Plug-in Part.

The Plug-in Part window is displayed. Specify a name for your part, and indicate whether to generate only the necessary directory structure, or create the directory structure and select one or more available script stubs. When you click Finish, the new part is added to the plugin/parts folder in the plug-in.

Values that you specify are immediately validated, and errors (such as existing names or files) are flagged (with help tips) for correction.

After you create your plug-in parts, you can edit the associated Python scripts with your preferred editor.


Manage plug-in roles

A plug-in role represents a managed entity within a virtual application instance.

You can create new roles for your plug-in, generating the directory structure and, optionally, one or more script stubs, such as install.py, configure.py, and start.py.

If you created a new plug-in project and selected the Generate project skeleton check box, the resulting project is created with a plugin/parts folder that contains a number of role scripts by default. You can add new plug-in roles to this folder.

Create a new plug-in role by completing the following steps:

  1. In the Project Explorer view, expand the plug-in project node.
  2. Right-click the parts folder.
  3. Select New > Plug-in Role.

The Plug-in Role window is displayed. Specify a name for your role, and indicate the part in which to create the role. If the specified part name does not exist, it is created for you. Indicate whether to generate only the necessary directory structure, or create the directory structure and select one or more available script stubs. When you click Finish, the new role is added to the plugin/parts/<project>.scripts/scripts folder in the plug-in.

Values that you specify are immediately validated, and errors (such as existing names or files) are flagged (with help tips) for correction.

After you create your plug-in roles, you can edit the associated Python scripts with your preferred editor.
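For example, a minimal start.py role script might look like the following sketch. Setting maestro.role_status to RUNNING to report that the role is up matches the behavior that the sample plug-in descriptions later in this section attribute to their start.py scripts; the rest of the sketch is illustrative.

# start.py - a minimal, hypothetical role lifecycle stub
# maestro and logger are assumed to be provided by the workload agent
logger.info('Starting role')
# ...start the middleware process here...
# Report that the role is up
maestro.role_status = 'RUNNING'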


Decorations for files and directories

As a visual aid to plug-in developers, certain decorations are dynamically added to the files and folders of plug-in contents to help you identify them more easily:

P

Designates a plug-in part.

N

Designates a plug-in node part.

R

Designates a plug-in role.

Check mark

Designates standard scripts for node parts and parts.

*

Designates a role lifecycle script.


Build the plug-in

When you are ready to build and export your plug-in, complete the following steps:

  1. In the Project Explorer view, right-click the plug-in project node.
  2. Select IBM Workload Plug-in > Build to build the plug-in package without Java source code. Alternatively, you can select IBM Workload Plug-in > Build with source to build the plug-in package with Java source code.
  3. The build is started. Examine the Console view that is opened to check the progress of the build and view the resulting log messages.
  4. A new export folder is created that contains the plug-in package, named <ID>-<Version>.tgz, and any included source files. This exported package can later be imported back into your Eclipse environment and modified as needed.


Install plug-ins

While in your Eclipse environment, you can upload your packaged plug-in to a remote PureApplication System environment for testing and debugging:

  1. If needed, modify your Eclipse environment preferences (select Window > Preferences > IBM Pattern Toolkit) to specify the system access information for the remote PureApplication System environment, including the PureApplication System IP address, and a valid user ID and password.
  2. Ensure that your project was built successfully.
  3. In the Project Explorer view, right-click the plug-in project node.
  4. Select IBM Workload Plug-in > Install/Update to deployer. The installation process is started.


Remove plug-ins

While in your Eclipse environment, you can remove an installed plug-in from a remote PureApplication System environment:

  1. If needed, modify your Eclipse environment preferences (select Window > Preferences > IBM Pattern Toolkit) to specify the system access information for the remote PureApplication System environment, including the PureApplication System IP address, and a valid user ID and password.
  2. In the Project Explorer view, right-click the plug-in project node.
  3. Select IBM Workload Plug-in > Remove from deployer. The uninstallation process is started.


Limitations

The Eclipse plug-in development environment does not support every configuration task that you might need to perform on your plug-in. For tasks that this Eclipse plug-in does not support, edit the plug-in configuration files directly with your preferred text editor.


Other reference

See the related links for references to plug-in development for virtual application patterns, the Plug-in Development Kit (PDK), and associated Java and Python script documentation that is provided with the PDK.

JSON Formatter and Validator

JSONLint: The JSON Validator

Python documentation: Subprocess management


Other configuration options

This section describes additional plug-in configuration options that are available.


Stop operations

In a stop.py Python script or stop.sh shell script, you can use the isDestroy method to determine whether the virtual application is only being stopped or whether the stop operation is part of deleting the application and destroying its virtual machines. The method checks whether the isDestroy flag is set for the virtual application.


stop.py

In a part stop.py or a node part stop.py, use the following commands to check the purpose of the stop operation and perform the appropriate action.

The isDestroy method is supported in version 2.0.0.4 or newer of the workload agent.

Confirm that the isDestroy method is available, then use the isDestroy method to perform different actions depending on whether the isDestroy flag is set or not. In this example, the result of the method is simply logged.

if 'isDestroy' in dir(maestro):
  if maestro.isDestroy():
      logger.info("Destroy the virtual application")
  else:
      logger.info("Stop the virtual application")


stop.sh

In a node part stop.sh, use the following commands to check the purpose of the stop operation and perform the appropriate action.

IBM Foundation Pattern 2.0.0.3 or newer is required. Source the common.sh script.

. common.sh

Confirm that the isDestroy method is available, then use the isDestroy method to perform different actions depending on whether the isDestroy flag is set or not. In this example, the result of the method is simply logged.

if [ "$(type -t isDestroy)" = "function" ]; then   if [ "$(isDestroy)" == 'true' ]; then     echo "Destroy the virtual application"
  else
    echo "Stop the virtual application"
  fi
fi


Create your own database plug-in

If you want to develop a plug-in to support your own database, you can create one based on the wasdb2 plug-in.

The wasdb2 plug-in includes parts for connections with an existing DB2 or Informix database. It also supports connections with a pattern-deployed DB2 database, meaning a virtual machine running DB2 is deployed in the same virtual application as the one running IBM WAS. You can choose to include either or both implementations in your custom plug-in.

For a pattern-deployed database, there are two virtual machines: one running WAS with the user's application, and the other running DB2 to manage data for the application. In the existing database case, the database is already up and running on another system, either in the cloud, as a shared service, or outside the cloud that IBM PureApplication System W1500 manages. In either case, WAS needs the IP address, port number, database name, and database credentials (user ID and password) to connect to the database. In the application model, the WAS node and the database node are modeled as components. The link between them represents the connection between them and provides the foundation for transferring the required access information.


Pattern-deployed database connection

Roles provide capabilities to orchestrate application startup, lifecycle management, and undeployment. For a database deployed with the IBM Web Application Pattern, a WAS role manages and interacts with the WAS instance deployed on its node, and a DB2 role interacts with the DB2 instance deployed on its node. The wasdb2 plug-in provides a link between the WAS and DB2 components. It inserts a dependency of the WAS role on the DB2 role in the topology document. At the start of the deployment, when both the WAS and DB2 roles change to the RUNNING state, the WAS/DB2/changed.py script runs. The DB2 role lifecycle scripts export DB2 characteristics, like the hostname/IP address, port number, database name, user ID, and password that are required to use the DB2 database on this deployed instance. The WAS/DB2/changed.py script gets this exported data, and passes the values into wsadmin scripts to configure the information so that applications in the WAS node can access the database.

Roles can also contribute operations to the deployment inlet. These operations can be used to modify the running deployment. For example, you can change the password of your DB2 database. The DB2 plug-in offers this operation, which changes the database password and exports this changed data. The WAS/DB2/changed.py script is notified of this update, and invokes a wsadmin script to update the changed password in the WAS instance that is using the database.
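As a rough illustration of this flow, a WAS/DB2/changed.py script might read the exported values through the maestro.peers dictionary, which is described later in this section. The export key names in this sketch are illustrative assumptions; real plug-ins define their own names.

# WAS/DB2/changed.py - hypothetical sketch
# maestro and logger are assumed to be provided by the workload agent
for name, data in maestro.peers.items():
    # Fully qualified role names have the form vm-template-name.instance-id.role-name
    if name.endswith('.DB2'):
        host = data.get('DB_HOST')   # illustrative key names
        port = data.get('DB_PORT')
        logger.info('DB2 export changed: %s:%s', host, port)
        # ...pass the values to a wsadmin script to update the data source...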


Existing database connection

Web Application Pattern supports connecting to two existing database types: DB2 or Informix. It uses a role for each: xDB2 and xInformix. These roles work like the DB2 role used for a pattern-deployed database. An application model component is available on the Virtual Application Builder pane for each type of database, with hostname/IP address, port number, database name, and user ID and password attributes. These values are specified at virtual application design time. A link transform makes the WAS role dependent on these roles for existing databases. The xDB2 role start.py script exports the values that are specified in the xDB2 component on the Virtual Application Builder pane, by using the same mechanism and key names as the DB2 role. The existing database roles offer configuration settings and deployment inlet change operations to dynamically change these configuration values, just as the DB2 role does. As with a pattern-deployed DB2 database, WAS/xDB/changed.py scripts get the exported xDB values (IP address, port number, database name, user ID, and password) and invoke the appropriate WAS configuration scripts so that the applications on the WAS node can access the database.

Two attributes can be added to dependent roles.

"asDependency" : "DB"
Normally, each dependent role provides its own dependent role scripts. The wasdb plug-in provides these scripts for all databases, and they are delivered as WAS/DB scripts. While the topology document has the WAS role, which is dependent on the appropriate DB role (DB2, xDB2, or xInformix), the "asDependency" attribute maps all dependent role script calls to WAS/DB, for example for changed.py. Database-dependent information, unique to each database, is passed to the wasdb link in a dblink_metadata JSONObject.
"localOnly" : true
This attribute is used in the existing resource or surrogate role cases to indicate that this role is local to the WAS node. It is especially important with scaling to invoke WAS/DB/changed.py only once per local WAS node. The next section describes surrogate roles.

The wasdb plug-in contributes a WASDB link to the Web Application Pattern pattern type. The source component is a WAS node (vm-template). The target component is a JSONObject with two elements:

dblink_metadata (required)
A JSONObject with two elements:

  packages (optional)
  A JSONArray of package names to be installed on the WAS node. The packages are added in the usual way to the $sourcePackages variable in the wasdb_link.vm Velocity template.

  parms (required)
  The database-specific parameters that the scripts require to configure Web Application Pattern to connect to the database.

role (optional)
The role to insert into the WAS template. It is also made a dependent role to the WAS role in the source WAS template, as is usual in Velocity template link transforms. The role element is used in the existing database case, for xDB2 or xInformix.

A pattern-deployed database does not need an extra role; the WAS role depends directly on the DB2 role. A targetRole parameter is included in the dblink_metadata parms element for a pattern-deployed database. Its value is the name of the pattern-deployed DB role of the dependency that is added to the WAS role.
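Putting these elements together, the target of a WASDB link for a pattern-deployed database might look like the following sketch; the package name is an illustrative assumption.

{
   "dblink_metadata" : {
      "packages" : [ "DB2_JDBC" ],
      "parms" : {
         "targetRole" : "DB2"
      }
   }
}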


Use a surrogate role

The surrogate role is an option if your database pattern deployment plug-in does not provide a role that closely mimics the DB2 role, or if the data it exports does not use the same names as the DB2 role and the xDB roles. A surrogate role is added to reflect status and changes in the target database role back to the wasdb plug-in in the manner that it expects. For example, suppose a database called DB3 has a DB3 role that you want to use with the wasdb plug-in. You create a role, DB3Surrogate, that depends on the DB3 role. It has a DB3Surrogate/DB3/changed.py script that gets the changed, exported data from DB3, converts it to the names and formats that wasdb expects, and exports it under those names. The WAS role is made dependent on the DB3Surrogate role. To realize this configuration with the WASDB link, targetRole in dblink_metadata parms would be DB3, and the DB3Surrogate role would be passed in as the role element.


Configure plug-ins for pattern type upgrades

When a user installs a new version of a pattern type, the user can choose to upgrade an existing virtual application instance to the new pattern type version. A plug-in can include scripts to back up data before an upgrade is started, in case the upgrade fails.

When a user requests an upgrade, the system performs the following actions:

The following diagram shows the state transitions for a virtual application instance, including the suspending and checkpointing states for an update.

Suspend scripts are used to suspend the roles and node parts, checkpoint scripts are used to back up the roles and node parts, and resume scripts are used to bring the role back to RUNNING status. These scripts run during an upgrade if they exist in the plug-in.

  1. When the user requests an upgrade, the command triggers the creation of an "update" file to indicate that an update is required when the virtual machine is rebooted.
  2. The role status is set to SUSPENDING.
  3. The following scripts run, and then the role moves to the SUSPEND state.

    1. {role}/pending_suspend.py
    2. {role}/dep/suspend.py
    3. {role}/suspend.py

    The scripts set the role status to either: SUSPEND_SUCCESSED or SUSPEND_FAILED. If a recoverable error occurs, the scripts must include logic to recover from the error. If an unexpected error (an exception) occurs that makes the role unrecoverable, the role state moves to ERROR.

  4. When all results from each role in the deployment are returned, the next step occurs:

    • If any role status changes to SUSPEND_FAILED, the update stops, the role state is set to RUNNING, and the following scripts run:

      1. {role}/pending_resume.py
      2. {role}/dep/resume.py
      3. {role}/resume.py

    • If all roles have a status of SUSPEND_SUCCESSED, the role target state is set to CHECKPOINT.

  5. The role status is set to CHECKPOINTING.
  6. The following scripts run, and then the role state moves to CHECKPOINT.

    1. {role}/pending_checkpoint.py
    2. {role}/dep/checkpoint.py
    3. {role}/checkpoint.py

    The scripts set status to either: CHECKPOINT_SUCCESSED or CHECKPOINT_FAILED. If a recoverable error occurs, the scripts must include logic to recover from the error. If an unexpected error (an exception) occurs that makes the role unrecoverable, the role state moves to ERROR.

  7. When all results from each role in the deployment are returned, the next step occurs:

    • If any role status changes to CHECKPOINT_FAILED, the update stops, the role state is set to RUNNING, and the following scripts run:

      1. {role}/pending_resume.py
      2. {role}/dep/resume.py
      3. {role}/resume.py

    • If all roles have a status of CHECKPOINT_SUCCESSED, the role target state is set to TERMINATED.

  8. In the TERMINATED state, scripts run to stop and reboot the virtual machines. The activation steps are similar to a fresh deployment:

    • The 0config.sh script and activation scripts run. The "update" file is detected so that the activation proceeds as an upgrade.
    • The working directory from the previous activation (/0config/nodepkgs) is deleted so that a clean activation can occur. Artifacts that are installed outside the working directory can be affected by plug-in scripts, but not by the 0config.sh script.
    • The new topology document for the deployment is parsed for each virtual machine vm-template. For each node part in the vm-template:

      1. Download the node part .tgz file and extract it into the {nodepkgs_root} directory.
      2. Run {nodepkgs_root}/setup/setup.py, if the script exists. Associated parms from the topology document are available as maestro.parms.
      3. Run the update scripts {nodepkgs_root}/common/install/*.sh|.py in ascending order.
      4. Run the start scripts {nodepkgs_root}/common/start/*.sh|.py in ascending order.
      5. Delete {nodepkgs_root}/setup/.

    • The workload agent processes parts and roles. For each part in the vm-template:

      1. Download the part .tgz file and extract the contents.
      2. Run install.py with parameters from the topology document.
      3. Update roles concurrently by running lifecycle scripts.

        • {role}/install.py then all {role}/{dep}/install.py
        • {role}/configure.py then all {role}/{dep}/configure.py
        • {role}/start.py
        • {role}/{dep}/changed.py

You can use the following methods to check the update status:

Figure 1. Python example

if maestro.isUpdate():
    print 'Handling update.'
    # To exit early:
    # sys.exit(0)

Figure 2. Shell script example

# Source common.sh for isUpdate utility
. common.sh
 
if [ "$(isUpdate)" == 'true' ]; then echo 'Handling update.'
# To exit early:
# exit 0
fi


Product-specific settings for pattern types and plug-ins

There are multiple IBM products that support virtual application patterns. To support situations where requirements are different for each product, pattern types and plug-ins can contain product-specific conditions. If you are using pattern types and plug-ins that are provided by IBM to develop your own plug-ins and pattern types, you need to understand how this capability works.

The following examples show different situations where product-specific configuration can be defined. The following products can be specified:

IWD
For IBM Workload Deployer V3.1.0.2 or later

IPAS
For IBM PureApplication System


Configuration options for pattern types

Installation

You can specify that a pattern type is only installed for a specific product in patterntype.json.

In this example, the pattern type is only installed if the value of products is IWD.

{
    "name"        : "patterntype.hello",
    "shortname"   : "ptype.hello",
    "version"     : "2.0.0.1",
    "description" : "DESCRIPTION",
    "builder"     : true,
    "prereqs":{
        "foundation":"2.0.0.1"
    },
    "requires":{
        "products":["IWD"]
    }
}

Licensing

IBM license IDs, license types, and the location of the license text are specified in patterntype.json for each product. The license text location is relative to the plug-in.

  • If a docroot value is not specified, the license text is in licenses/*.html.
  • If a docroot value is specified, the license text is in docroot/licenses/*.html. For example, if you set "docroot" : "iwddocs", then the license text is in iwddocs/licenses/*.html.

{
    "name"        : "patterntype.hello",
    "shortname"   : "hello",
    "version"     : "2.0.0.1",
    "description" : "DESCRIPTION",
    "builder"     : true,
    "resources" : [
        {
            "requires" : {
                "products" : ["IWD"]
            },
            "license" : {
                "pid"  : "5725D65",
                "type" : "PVU"
            },
            "docroot" : "iwddocs"
        },
        {
            "requires" : {
                "products" : ["IPAS"]
            },
            "license" : {
                "pid"  : "5725D66",
                "type" : "PVU"
            },
            "docroot" : "ipasdocs"
        }
    ],


Configuration options for plug-ins

Installation

You can specify that a plug-in is only installed for a specific product in config.json.

In this example, cachingExternal is only installed if the value of products is IWD.

  {
   "name":"cachingExternal",
   "version":"2.0.0.1",
   "patterntypes":{
      "primary":{
         "foundation":"2.0"
      }
   },
   "requires":{
      "products":["IWD"]
   },

Plug-in configuration defaults

In this example configuration of config.json, the settings for IBM PureApplication System W1500 in "defaults" override settings that are defined in "parms".

   "parms":{
      "dbaas_standard":true,
      "environment":null
   },
   "defaults":[
      {
         "requires":{
            "products":["IPAS"]
         },
         "parms":{
            "environment":"PROD"
         }
      }
   ],

Package definition

You can define packages in config.json that are specific to a product. The following example shows a package that is specific to IBM PureApplication System W1500.

"packages": {
    "Hello": [
        {
          "persistent":true,            
            "requires" : {
                "arch" : "x86_64",
                "memory" : 128,
                "products" : ["IPAS"]
            },
            "parts" : [
                {
                    "part" : "parts/hello.scripts.tgz"
                }
            ] 
        }
    ] 
}

Interface customization

You can make a user interface element available to a specific product only by specifying the product in the UI attribute definition in config_meta.json.

   {
      "id":"parms.environment",
      "label":"CONFIG_LABEL",
      "type":"string",
      "requires":{
         "products":["IWD"]
      }
   },

Product name in lifecycle scripts

To include the product name within a Python (.py) lifecycle script, add the following line:

maestro.system['product']
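For example, a lifecycle script can branch on the product name, as in the following sketch; the log messages are illustrative.

# Hypothetical sketch: branch on the product in a lifecycle script
if maestro.system['product'] == 'IPAS':
    logger.info('Running on IBM PureApplication System')
else:
    logger.info('Running on IBM Workload Deployer')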


Use a quorum leader to manage changes in a cluster

If a role is associated with a cluster, meaning more than one software instance, you can define one or more instances as the quorum leader to coordinate changes across all members of the cluster.

For example, in the IBM WAS plug-in (plugin.com.ibm.was), the quorum number is set to 1 in the virtual machine template.

"roles": [
   {
      "plugin": "$provider.getPluginScope()",
      "name": "WAS",
      "type": "WAS",
      "quorum": 1,

This setting means that in a deployed WAS cluster, only one instance is elected as the quorum leader.

In the lifecycle scripts, maestro.peers can be used to obtain peer role information. If an instance in the cluster has a value of QUORUM for the key QUORUM, that instance is the quorum leader.

role = maestro.role
peers = maestro.peers
node = maestro.node
parms = maestro.parms
role_status = maestro.role_status
 
def isQuorum( r=role ):
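    # The quorum leader instance stores the value 'QUORUM' under the key 'QUORUM'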
    return r.get('QUORUM') == 'QUORUM'
 
logger.debug('quorum leader: %s', isQuorum())
 
if role_status != 'INSTALLING' and role_status != 'CONFIGURING' and role_status != 'INITIAL' \
    and not isQuorum(role) and not maestro.node['template']['scaling']['max'] == 1:
    for p in peers:
        if isQuorum(peers[p]):
            # update local SSL key
            break

When Secure Socket Layer (SSL) keys are generated for the WAS cluster, the keys must be synchronized across all instances that are associated with the WAS role. The command to generate the keys is run against the quorum leader only. The remaining instances receive notification about the change from the quorum leader, accept the change, and update their local SSL keys.

The maestro.role object is a read/write Python dictionary in which each role instance can store data. Data that any script stores in maestro.role is preserved when a virtual machine restarts, so the data can be accessed after the restart. The maestro.peers object is a read-only dictionary that contains the role data of all the other peers, keyed by fully qualified role name. A fully qualified role name has the form vm-template-name.instance-id.role-name, and maestro.node['id'] is the server name, which is vm-template-name.instance-id. If a peer role changes its role data, all of its peers are notified by an invocation of their role/changed.py script.
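The following sketch illustrates this contract; the key name and value are arbitrary examples.

# Hypothetical sketch: publish and inspect role data
# maestro and logger are assumed to be provided by the workload agent
maestro.role['DB_HOST'] = '192.0.2.10'   # preserved across virtual machine restarts
for name, data in maestro.peers.items():
    # name has the form vm-template-name.instance-id.role-name
    logger.debug('peer %s exports %s', name, data)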


Samples for the Plug-in Development Kit

Use these samples to help you learn how to develop custom plug-ins. The plug-ins that you develop can be added to the catalog, and the components, links, and policies that you define can be used to build virtual applications.

Plug-ins and pattern types are built and packaged by using Apache Ant. The Plug-in Development Kit (PDK) provides Ant build (.xml) files for this purpose. These build files can run from the command line or from within Eclipse. Other development environments might work, but only the command line and Eclipse are supported.

The samples are included with the Plug-in Development Kit (PDK); download the PDK to get started with the samples. You can download the PDK from the IBM PureApplication System W1500 user interface Welcome page.

You must enable the PDK license before you can use the PDK.


Sample pattern types and plug-ins

There are eight plug-in projects included in the PDK package. These plug-ins show how to design an application model, configuration, virtual machine template, and Python lifecycle scripts of a plug-in.

A shared service sample is packaged with the hello sample pattern type and allows deployment of a simple shared service and its corresponding virtual application client.

In addition to these projects, the WAS Community Edition (WAS CE) sample includes a compressed file, pdk-wasce-ptype_1.0.0.0.zip, that contains an additional project to demonstrate a simple version of the WASCE plug-in.

Projects for the Hello sample

plugin.com.ibm.sample.hellocenter - HCenter component

This plug-in defines the HCenter component. It deploys a virtual machine that contains simple message center middleware named HelloCenter. HelloCenter opens port 4000, listens for client requests, and generates and returns greeting messages. With this plug-in, you can learn how to write lifecycle scripts, such as install.py, configure.py, start.py, and stop.py, for middleware like HelloCenter. You can use the following scripts:

  • install.py: Downloads artifacts from the storehouse and installs the middleware. If you want to download and extract the .tgz installation file from the storage server, use the downloadx function instead (see the sketch after this list).
  • configure.py: Downloads the artifacts that are uploaded by the plug-in, configures the middleware, and opens the firewall to accept client requests. It also exports the IP address of the server.
  • start.py: Starts the HelloCenter server and changes the role status to Running.
  • stop.py: Stops the HelloCenter server.

You can also access the maestro module and use the logger to log your messages.
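A condensed sketch of such an install.py follows. The parameter name, the target path, and the exact downloadx signature are assumptions for illustration only.

# install.py - hypothetical, condensed sketch for HelloCenter
# maestro and logger are assumed to be provided by the workload agent
logger.info('Installing HelloCenter')
# Assumption: downloadx downloads a .tgz file from the storage server and extracts it;
# the 'installURL' parameter name and the target path are illustrative
maestro.downloadx(maestro.parms['installURL'], '/opt/hellocenter')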

plugin.com.ibm.sample.hello - Hello component

This plug-in defines the Hello component. It deploys a virtual machine that contains a HelloCenter client. The client sends a request that contains the message sender identity to HelloCenter, retrieves the returned greeting message, and displays it on the console. This plug-in accesses HelloCenter and must work with the HClink plug-in. This plug-in contains the following scripts:

  • configure.py: Logs the sender information.
  • start.py: Changes the role status to Running.

plugin.com.ibm.sample.hclink - HClink link

This link connects the Hello and HCenter components. It provides the Hello client with the IP address of the HCenter server, which allows the client to send a request to the server. This link also has an attribute that specifies the receiver name of the greeting message. The plug-in is installed with the Hello plug-in in the same virtual machine. This plug-in contains the following scripts:

  • changed.py: This script runs only after the Hello and HelloCenter plug-in roles are in the Running state.

    This script checks whether the HelloCenter role on which it depends exists, and reads the transferred parameters from HelloCenter, which are exported in the HelloCenter configure.py script. The changed.py script uses the HelloCenter IP address to access HelloCenter and prints the returned messages. This script also shows how to localize your messages.

plugin.com.ibm.sample.hlpolicy - HLPolicy policy

This plug-in defines a policy called HLPolicy. Its purpose is to show an example of a linked policy. Linked policies enable a plug-in to include a policy that adds attributes to an existing component defined within an existing plug-in. This policy extends the Hello component.

plugin.com.ibm.sample.sharedservice.client - Shared service client component

This plug-in provides the shared service sample client component. With this plug-in, you can learn how to write a simple client for a shared service. This plug-in includes the lifecycle scripts, a ServiceProvisioner, and an action script to drive methods on the shared service.

  • SampleServiceProvisioner.java: This class carries out required tasks when you connect to and disconnect from the shared service. In this example, it retrieves information from the service registry that the shared service stored for use by its clients. For example, the shared service in this case stores its own IP address for the clients to use. This class can be used to perform any setup or teardown tasks that the clients require.
  • action.py: This script is triggered when an operation is called from the client. This script provides an example of how to access the shared service registry object from a script, and also how to invoke REST methods against a shared service.

plugin.com.ibm.sample.sharedservice.service - Shared service component

This plug-in provides the sample shared service component. With this plug-in, you can learn how to write a simple shared service. This plug-in includes the lifecycle scripts, a RegistryProvider, and documents that describe how to expose REST methods on the shared service.

  • SampleRegistryProvider.java: This class shows how to add more custom information to the shared service registry object, which can be retrieved by clients. In this example, the shared service IP address is added to the registry object, which is then retrieved by the client's ServiceProvisioner.
  • restapi.py: This script is triggered when the operation is started. You can modify this script to carry out different actions for operations.

Projects for the WAS Community Edition (WAS CE) sample

plugin.com.ibm.wasce-1.0.0.1 - WAS (CE) component version 1.0.0.1

This plug-in defines the full version of the WAS (CE) component. With the full version of this component, you can deploy a web archive (WAR) file and apply operation and configuration updates to the WAS CE server. This plug-in also contains a metric collector, which collects data for monitoring, and a status checker, which monitors the status of the server.

This plug-in contains the following scripts:

  • install.py: Downloads artifacts from the storehouse and installs the WAS CE server.
  • configure.py: Exports the attributes of the WAS CE server and passes them to dependent roles, such as the "ProxyClient" role that is provided in the wasce.routing.policy plug-in.
  • start.py: Starts the WAS CE server and runs the script that is provided by the server to deploy the WAR file.
  • stop.py: Stops the WAS CE server.

plugin.com.ibm.wasce.db2 - link

This plug-in defines a link that allows a user to link the WAS CE component to a DB2 component. When the virtual application is deployed, a DB2 data source is created on the WAS CE instance.

This plug-in contains the following scripts:

  • install.py: Copies the JDBC files required to access the DB2 database into the lib directory of the WAS CE server.
  • changed.py: This script is called when the status of the DB2 role is updated. The WAS CE server deploys the WAR file after the DB2 role changes to the RUNNING state.

plugin.com.ibm.wasce.routing.policy - Routing policy

This plug-in defines a linked policy (type 2). In a linked policy approach, a policy defined in one plug-in is used to extend the capabilities of an existing component defined in another plug-in. When deployed within a virtual application, this policy registers the WAS CE instance with the elastic load balancing proxy service (ELB). Users are then able to access the web application on the WAS CE cluster through the ELB instance.

For more information, see Elastic load balancing (ELB) proxy service.

This plug-in contains the following scripts:

  • start.py: Completes preparation work that is needed for registration with the ELB instance, such as opening ports.
  • changed.py: Handles the role status change of any WAS CE roles in the instance. It registers the WAS CE instance to the ELB instance if the WAS CE state is changed to RUNNING, and removes the registration from the ELB instance if the application is shut down.

plugin.com.ibm.wasce.scaling.policy - Scaling policy

This plug-in defines a linked policy and does not contain any scripts. It scales the WAS CE instances in or out according to the metrics that the monitoring framework reports.

pdk-wasce-ptype_1.0.0.0.zip - compressed file that contains the plugin.com.ibm.wasce-1.0.0.0 project

You can extract this file and examine the project that it contains to see the code for a simple version of the WASCE plug-in. This plug-in defines the basic version of the WAS (CE) component. With the basic version of this component, you can deploy a web archive (WAR) file and apply operation and configuration updates to the WAS CE server.

This plug-in contains the following scripts:

  • install.py: Downloads artifacts from the storehouse and installs the WAS CE server.
  • start.py: Starts the WAS CE server and runs the script that is provided by the server to deploy the WAR file.
  • stop.py: Stops the WAS CE server.

With these plug-ins, you can learn the following tasks:

Application model and configuration, virtual machine template, and virtual machine template configuration samples are included with the plug-ins. You can design your artifacts by using the sample application model, configuration, virtual machine template, and virtual machine configurations as a guide, rather than creating them from scratch.

Sample Hello Application

This sample virtual application consists of Hello and HelloCenter components that are linked by an HCLink, with values for all required attributes. It is ready to deploy, with no virtual application construction required. It is included in the plugin.com.ibm.hello.samples plug-in project so that you can start to use the hello pattern type quickly, and also to show how you can package and deliver your own sample virtual applications.

  • The patterntype.json file provides a sample on how to configure your pattern type.
  • The build.patterntype.xml file is used to build the pattern type into a package.

    Before you run this script, you must run the build.xml in the plugin.depends project to build all plug-ins in your workspace.

  • The license folder includes all license documents for all supported languages.
  • The locales folder stores all translated files for all supported languages for the message.json file.

Shared Service sample

The shared service sample is packaged with the hello sample pattern type and allows deployment of a simple shared service and its corresponding virtual application client.

  1. SampleService: This component deploys one or more virtual machines (VMs) that together make up a shared service, which listens for client REST requests, and generates and returns corresponding output.

    • The metadata.json file defines the shared service parameters which can be customized at deployment time. You can use this file to add your own parameters to the shared service.
    • The servicesample.vm file contains a service-registry object which identifies what RegistryProvider is to be used by this shared service, in addition to the standard vm-template object.
    • The servicesampleapi.json object describes all the REST methods that are exposed by the shared service. Use this object to determine how to call the appropriate methods if you write clients for the shared service. Key parameters for this JSON object are the type of REST method (PUT, GET, DELETE, and POST) and the method's required parameters (see the sketch after this list). Those parameters are:

      • resource: The target resource; the first parameter in the URL that is passed in. It allows the shared service developer to divide methods into suitable groupings.
      • clientdeploymentexposure (optional): This parameter sets whether the shared service is allowed to be accessed by client VMs.
      • role: The middleware role on the shared service against which this operation is triggered.
      • script: The Python script and its entry method that are called when the operation is invoked.
      • timeout (optional): Milliseconds to wait for this operation to complete.
      • pattern (optional): A pattern of more URL parameters made up of variables and constants that are separated by forward slashes. Variables are enclosed by curly brackets. The method matches only if it can also match the pattern of the URL used to invoke it.

  2. SampleClient: Use this component in the Virtual Application Builder to build a shared service virtual application client that can send REST requests to the sample shared service.

    • The metadata.json file defines the parameters that can be customized when the application is initially built by using the Virtual Application Builder. You can use this file to add your own parameters to the client.
    • The clientsample.vm file contains the service-template object, which is critical to describe how this client connects to the shared service. Parameters from metadata.json can be passed in here as required; for example, to specify the shared service name, version, or client version. The ServiceProvisioner that the client uses is also specified in this file.
    • The operation.json file defines the operations that are exposed by the client. These operations are visible on the Manage > Operations tab. In this example, the script that is triggered is the action.py script.

In this example, the shared service and client are provided by individual plug-ins. In general, a plug-in might provide any number of components, links, and policies.
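To make the servicesampleapi.json parameters that are described above more concrete, a single method definition might look like the following sketch. The structure, field names, and values are illustrative assumptions; they are not the shipped sample.

{
   "resource" : "greetings",
   "method"   : "GET",
   "role"     : "SampleService",
   "script"   : "restapi.py get_greeting",
   "timeout"  : 10000,
   "pattern"  : "{sender}/messages",
   "clientdeploymentexposure" : true
}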

Hello Application Template

This sample virtual application template consists of Hello and HelloCenter components that are linked by an HCLink, with values for some, but not all, of the required attributes. It is ready to deploy from the catalog with no virtual application construction required. At deployment time, a panel is shown where you enter any required attributes that are not set and change the values of other attributes. It is included in the plugin.com.ibm.hello.samples plug-in project for your convenience. Use this sample to see how to package and deliver your own virtual application templates.

Sample WAS CE Web application

This sample virtual application demonstrates the WAS CE plug-in. It is a simple application that allows a user to deploy a WAR file, with support for simple operation and configuration updates, and monitoring capabilities to support automatic scaling of the application. Use the URL http://[IP]:8080/sample to access the sample application after it is deployed.

The appmodel.json file represents the serialization of the model that is defined in the Virtual Application Builder user interface. Components (nodes) and links, along with user-specified property values, are represented.

Sample WAS CE Java EE Web application

This sample virtual application demonstrates the WAS CE DB2 plug-in. It deploys the DayTraderLite application, which is a Java EE Web application that simulates a stock trading system with WAS and DB2. Use the URL http://[IP]:8080/tradelite to access the sample application after it is deployed.

Sample WAS CE Web application with routing policy

This sample virtual application demonstrates the WAS CE plug-in with the routing policy applied. Before you deploy this virtual application, ensure that the ELB proxy service is started. Use the URL http://[IP]:8080/sample to access the sample application after it is deployed.

Sample WAS CE elastic scaling Web application

This sample virtual application demonstrates the WAS CE plug-in with the scaling policy applied. The virtual application instance scales out when the active thread count in the default thread pool on the WAS CE instance is larger than 10 and it scales in when the active thread count is less than 2. Use the URL http://[IP]:8080/sample to access this sample application after it is deployed.

In the sample virtual application, each component and link is provided by its own plug-in of the same name. In general, a plug-in can provide any number of components, links, and policies.


Set up the plug-in samples environment

Set up the environment for the plug-in samples by building the sample pattern type. The following products are required before you set up the environment:


Build the sample plug-ins and pattern types from the command line

About this task

These steps show you how to build the sample plug-ins and pattern types from the command line.

Procedure

Build the plugin.depends project

  1. Type cd iwd-pdk-workspace/plugin.depends from the command line.

  2. In the plugin.depends project, run the build.xml Ant script with the following command:

    ant
    
    The command builds all the plug-ins in the workspace.

Build the Hello sample plug-ins and pattern type

  1. Change directory to the patterntype.hello project and run the build.patterntype.xml script. Type ant -f build.patterntype.xml. This command builds the pattern type.

  2. In the patterntype.hello project, a folder named export is created.

  3. Go to the root of the export folder. The .tgz pattern type binary file is located there. It is ready for installation into the catalog.

Build the WAS Community Edition sample plug-ins and pattern type

To use the WAS CE sample, you must first download the WAS CE binary file. Then, you can build the WAS CE plug-ins (from plugin.depends) and the WAS CE pattern type as shown in the following steps, which demonstrate how to use the -Dstorage.dir command to package the WAS CE binary file with the sample.

  1. Create a storage directory. In the following steps, this storage directory is referred to as <storage_dir>.

  2. Download the binary file for the WAS Community Edition (WAS CE) server to <storage_dir>/wasce so that you can add it to the pattern type:

    1. Go to the download page for WAS CE on DeveloperWorks: https://www.ibm.com/developerworks/downloads/ws/wasce/.

    2. Click Download.

    3. Log in using your DeveloperWorks user account.

    4. Download the Server for UNIX to <storage_dir>/wasce. The file name is wasce_setup-3.0.0.x-unix.bin.

      Note: The file name will vary depending on the current version.

  3. Update config.json with the file name for the WAS CE version that you downloaded in step 2.

    1. Open plugin.com.ibm.wasce-1.0.0.x/plugin/config.json in a text editor.

      Note: The name of this directory will vary depending on the version of the sample that you are using.

    2. Change the file name referenced by config.json to the name of the file that you downloaded in step 2. You must change the file name in two places:

      "files": [
            "wasce\/wasce_setup-3.0.0.2-unix.bin"
         ],
      
      and

      "parts": [
                     {
                        "part": "parts\/wasce.tgz",
                        "parms": {
                           "binaryFile": "wasce\/wasce_setup-3.0.0.2-unix.bin"
                        }
                     }
                  ]
               }
            ],
      

    3. Save your changes.

    4. Run the build.plugin.xml Ant script in the plugin.com.ibm.wasce-1.0.0.x project to rebuild the plug-ins for the WAS CE sample. To run the Ant script, change directory to plugin.com.ibm.wasce-1.0.0.x from the command line. Type ant -f build.plugin.xml.

      Note: The name of this directory will vary depending on the version of the sample that you are using.

  4. Change directory to the patterntype.wasce.ptype project and run the build.patterntype.xml script. Type ant -f build.patterntype.xml -Dstorage.dir=<storage_dir>. This command builds the pattern type and copies the WAS CE binary file into the pattern type.

    Note: If your files are on a remote site, use the -Dstorage.url parameter. For example, ant -f build.patterntype.xml -Dstorage.url=<remote server URL>.

  5. The .tgz pattern type binary file, wasce.ptype-1.0.0.1.tgz, is now ready for import into the catalog. When you import the pattern type to the system, the WAS CE binary file is installed into the Storehouse. You can see the file by using the Storehouse Browser.

    If you want to change the plug-in or pattern type, you can install a new version of it without using the -Dstorage.dir=storage_dir parameter during your build or including the WAS CE binary file, because you can use the same WAS CE binary file that is already in the Storehouse. This method allows for a faster import time for your enhancements. For more information about the -Dstorage.dir and -Dstorage.url parameters, see Pattern type packaging reference.

Results

You built the sample pattern type and plug-ins.


Build the sample plug-ins and pattern types in Eclipse

About this task

These steps show you how to build the sample plug-ins and pattern types in Eclipse.

Procedure

Build the plugin.depends project

  1. Import the PDK plugin.depends project and the sample source projects.

    1. Create a workspace and start Eclipse.
    2. Click File > Import > General > Existing Projects into Workspace. Select Select root directory. Click Browse to select the iwd-pdk-workspace directory where you downloaded and expanded the pdk-<version>.zip file.
    3. Select plugin.depends and the sample projects that you want to build. When the import is complete, the projects are added to your workspace.

  2. Build all plug-ins in the workspace. Go to the plugin.depends project and run the build.xml Ant script. To run the Ant script, right-click on the file and select Run As > Ant Build.

Build the Hello sample plug-ins and pattern type

  1. Build the hello pattern type. Go to the patterntype.hello project and run the build.patterntype.xml script. To run the Ant script, right-click on the file and select Run As > Ant Build.

  2. Refresh the patterntype.hello project. A folder named export displays. Go to the export folder. The .tgz pattern type file is located here. It is ready for installation into the catalog.

Build the WAS Community Edition sample plug-ins and pattern type

To use the WAS CE sample, you must first download the WAS CE binary file. Then, you can build the WAS CE plug-ins (from plugin.depends) and the WAS CE pattern type as shown in the following steps, which demonstrate how to use the -Dstorage.dir command to package the WAS CE binary file with the sample.

  1. Create a storage directory. In the following steps, this storage directory is referred to as <storage_dir>.

  2. Download the binary file for the WAS Community Edition (WAS CE) server to <storage_dir>/wasce so that you can add it to the pattern type:

    1. Go to the download page for WAS CE on DeveloperWorks: https://www.ibm.com/developerworks/downloads/ws/wasce/.

    2. Click Download.

    3. Log in using your DeveloperWorks user account.

    4. Download the Server for UNIX to <storage_dir>/wasce. The file name is wasce_setup-3.0.0.x-unix.bin.

      Note: The file name will vary depending on the current version.

  3. Update config.json with the file name for the WAS CE version that you downloaded in step 2.

    1. Expand the plugin.com.ibm.wasce-1.0.0.x project on the Project Explorer tab in Eclipse.

    2. Expand the plugin folder in the plugin.com.ibm.wasce-1.0.0.x project.

      Note: The name of this directory will vary depending on the version of the sample that you are using.

    3. Double-click config.json to open it in the Config JSON Editor. Alternatively, you can right-click the file and select Open With > Config JSON Editor.

    4. Select the config.json tab in the editor.

    5. Change the file name referenced in config.json to the name of the file that you downloaded in step 2. You must change the file name in two places:

      "files": [
            "wasce\/wasce_setup-3.0.0.2-unix.bin"
         ],
      
      and

      "parts": [
                     {
                        "part": "parts\/wasce.tgz",
                        "parms": {
                           "binaryFile": "wasce\/wasce_setup-3.0.0.2-unix.bin"
                        }
                     }
                  ]
               }
            ],
      

    6. Save your changes.

    7. Run the build.plugin.xml Ant script in the plugin.com.ibm.wasce-1.0.0.x project to rebuild the plug-ins for the WAS CE sample. To run the Ant script, right-click on the file and select Run As > Ant Build.

  4. Build the WAS CE pattern type. Go to the patterntype.wasce.ptype project and run the build.patterntype.xml script by using the -Dstorage.dir argument. To run the Ant script, right-click on the file and select Run As > Ant Build. Go to the Main tab, and add -Dstorage.dir=<storage_dir> to the arguments section. Click Run. This command builds the pattern type and copies the WAS CE binary file into the pattern type.

    Note: If your files are on a remote site, use the -Dstorage.url parameter. For example, ant -f build.patterntype.xml -Dstorage.url=<remote server URL>.

  5. Refresh the patterntype.wasce.ptype project. A folder named export displays. Go to the export folder. The .tgz pattern type file is located here. It is ready for installation into the catalog.

    If you later change the plug-in or pattern type, you can build and install a new version without using the -Dstorage.dir=<storage_dir> parameter or including the WAS CE binary file, because the pattern type reuses the WAS CE binary file that is already in the Storehouse. This method allows for faster import of your enhancements. For more information about the -Dstorage.dir and -Dstorage.url parameters, see Pattern type packaging reference.
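
For reference, the following sketch shows equivalent command-line builds, assuming that Ant is on your PATH and that each command is run from the indicated project directory:

    # from plugin.com.ibm.wasce-1.0.0.x: rebuild the WAS CE sample plug-ins
    ant -f build.plugin.xml

    # from patterntype.wasce.ptype: build the pattern type and package the binary file
    ant -f build.patterntype.xml -Dstorage.dir=<storage_dir>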

Results

You set up the environment and built the sample pattern type and plug-ins.


Sample: Import and deploy the sample pattern types

You can import the sample pattern types into the catalog and then deploy them. The steps for this task assume the following:

The hello sample pattern type archive file is in the patterntype.hello/export directory in your workspace. The pattern type archive file is named hello-x.x.x.x.tgz, where x.x.x.x is the version number of the pattern type.

The WAS CE sample pattern type archive file is in the patterntype.wasce.ptype/export directory in your workspace. The pattern type archive file is named wasce.ptype-x.x.x.x.tgz, where x.x.x.x is the version number of the pattern type.


Import the hello sample pattern type

Procedure

  1. Log in to PureApplication System as administrator or as a user with permission to create a pattern type.
  2. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  3. Click Cloud > Pattern Types.
  4. Click New on the toolbar. The Install a pattern type window displays.
  5. On the Local tab, click Browse. Select the hello-x.x.x.x.tgz file. When the installation process completes, the pattern type, patterntypetest.hello, is displayed in the Pattern Types pane.
  6. Select the patterntypetest.hello pattern type from the list. The pattern type details display on the right.
  7. On the detail pane, click Accept to accept the license agreement.
  8. On the detail pane, click Enable to enable the pattern type.

Results

You imported the hello sample pattern type into the PureApplication System catalog and enabled it.

What to do next

Deploy a virtual application that is based on the hello sample pattern type. A sample virtual application pattern and virtual application template are provided so that you can easily deploy a virtual application that is based on the hello sample pattern type.


Deploy the hello sample virtual application pattern

The sample virtual application pattern is based on the Hello pattern type and includes example registered users and a sender for messaging.

Procedure

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Patterns > Virtual Applications.

  3. In the list of pattern types, select patterntype.hello 2.0. The list of virtual application patterns displays.

  4. Select the patterntype.hello 2.0 virtual application pattern.

  5. Click Deploy on the toolbar. The Deploy Virtual Application dialog box displays.

  6. Select the IP type and target for the deployment.

  7. Click OK. The virtual application pattern is deployed.

  8. Click Instances > Virtual Applications to view the deployed virtual application instance.

  9. Select the virtual application instance.

  10. In the details pane, find the virtual machine with the HCenter client in the list of virtual machines. It has the name Hello_Plugin-HVM-XXXX, where XXXX is a set of digits. Click the Log link next to the virtual machine to open Log Viewer in a new window.

  11. In Log Viewer, select IWD Agent and then choose ../logs/Hello_Plugin-HVM.XXXX.hello and then console.log. The log includes messages that are sent by users. For example:

    [2011-06-30 04:10:49,592] Hello/HCenter/changed.py 47121262922944 pid=16205 
    INFO Send the request to get a greeting message from Mike to Alice
    [2011-06-30 04:10:49,859] Hello/HCenter/changed.py 47121262922944 pid=16205 
    INFO Receive the message from hello center: Mike, a kind greeting message from Alice has been sent out 

  12. From Instances > Virtual Applications, select the application, and click the Manage button.

  13. Change to the new tab in your browser, which shows the Virtual Application Console.

  14. Choose the Operations tab.

  15. Select the Hello role. Use the operation to send a message to the HelloCenter and get a response back.

  16. Select the HelloCenter role. Use the operation to edit the list of registered users, which affects response messages.

What to do next

To explore the hello sample virtual application pattern in more detail, you can create a clone of it and then customize the clone before you deploy it.


Deploy the hello sample virtual application template

The sample virtual application template is based on the Hello pattern type and includes example registered users and a sender for messaging.

Procedure

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Catalog > Virtual Application Templates.

  3. In the list of pattern types, select patterntype.hello 2.0. The list of templates displays.

  4. Select the patterntype.hello 2.0 virtual application template.

  5. Click Deploy on the toolbar. The Deploy Virtual Application dialog box displays.

  6. Specify a list of registered user names and a Sender name. The Sender name must be in the list of registered user names.

  7. Click OK. The virtual application pattern is deployed.

  8. Click Instances > Virtual Applications to view the deployed virtual application instance.

  9. Select the virtual application instance.

  10. In the details pane, find the virtual machine with the HCenter client in the list of virtual machines. It has the name Hello_Plugin-HVM-XXXX, where XXXX is a set of digits. Click the Log link next to the virtual machine to open Log Viewer in a new window.

  11. In Log Viewer, select IWD Agent and then choose ../logs/Hello_Plugin-HVM.XXXX.hello and then console.log. The log includes messages that are sent by users. For example:

    [2011-06-30 04:10:49,592] Hello/HCenter/changed.py 47121262922944 pid=16205 
    INFO Send the request to get a greeting message from Mike to Alice
    [2011-06-30 04:10:49,859] Hello/HCenter/changed.py 47121262922944 pid=16205 
    INFO Receive the message from hello center: Mike, a kind greeting message from Alice has been sent out 

  12. From Instances > Virtual Applications, select the application, and click the Manage button.

  13. Change to the new tab in your browser, which shows the Virtual Application Console.

  14. Choose the Operations tab.

  15. Select the Hello role. Use the operation to send a message to the HelloCenter and get a response back.

  16. Select the HelloCenter role. Use the operation to edit the list of registered users, which affects response messages.

What to do next

To explore the sample virtual application pattern in more detail, you can create a clone of it and then customize the clone before you deploy it.


Deploy and run the shared services sample

Before you begin

Ensure that you built and deployed the hello sample before you deploy the shared services sample.

Procedure

Deploy the sample shared service

  1. Click Cloud > Shared Services to open the Shared Services view.

  2. Select Sample Shared Service from the list. If the Sample Shared Service is not listed, ensure that you imported and enabled the hello pattern type as described in the previous section.

  3. Click Deploy to deploy the sample shared service.

  4. Select the configuration to use in the configuration page. For example, enable the REST API on all of the shared service VMs, and set the Management and Service VMs to initialize with one VM. Click OK.

  5. Select the target environment profile or target cloud group to match the deployment location in the deployment configuration page. Click OK.

  6. Click Instances > Shared Services to monitor the start of the sample shared service.

Create and deploy a new client by using the shared service client sample component

  1. Click Patterns > Virtual Applications.

  2. Select patterntype.hello 2.0 in the second box.

  3. Create an application by clicking the +.

  4. Click Start Building on the create application page.

  5. Click the Diagram tab on the Virtual Application Builder page that opens in a new browser tab.

  6. Drag the Shared Service Client Sample from the Other Components palette onto the canvas. Click Save.

  7. Specify a name for the application on the Save Application page. Click OK.

  8. Return to the virtual application patterns page, and refresh the page to see the new application that is displayed in the list of virtual application patterns. Select the application and deploy it by clicking Deploy. When the deployment is complete, the status icon turns green.

  9. Click Instances > Virtual Applications. Select the deployment to load the details view in the right pane. Click Manage to open the management view for the application in a new browser tab.

  10. Select the Operations tab.

  11. Select SharedService-Client.ServiceClient to open the operations page for this role.

  12. Expand the REST Calls section.

  13. Populate the URL field by setting the first parameter to the target resource and any further parameters to additional arguments, for example serviceSample/testArgument. Select the required REST method, for example GET. Click Submit.

  14. Monitor the status of the REST call in the Operation Execution Results panel at the bottom of the screen. Wait until the Status column shows Done and Success is listed in the Result column. When the call is complete, the client has sent a REST request to the shared service and received a response.

  15. Confirm that the shared service saw the call by clicking Instances > Shared Services and selecting the Sample Shared Service instance that you deployed. Select the Log link that corresponds to the middleware role where the call was directed to open the Log Viewer in a new browser tab.

  16. Find the trace output for the role where the call was directed, such as ServiceSample2. Expand the section, expand the IWD Agent section, and then expand the section for the role. Within trace.log, the debug statement shows when the REST request from the client was received and processed.


Import the WAS CE sample pattern type

Procedure

  1. Log in to PureApplication System as administrator or as a user with permission to create a pattern type.
  2. Click the Workload Console tab at the top of the Welcome page to open the workload console.
  3. Click Cloud > Pattern Types.
  4. Click the + on the toolbar. The Install a pattern type window displays.
  5. On the Local tab, click Browse. Select the wasce.ptype-x.x.x.x.tgz file. When the installation process completes, the pattern type, WASCE Sample, is displayed in the Pattern Types pane.
  6. Select the WASCE Sample pattern type from the list. The pattern type details display on the right.
  7. On the detail pane, click Accept to accept the license agreement.
  8. On the detail pane, click Enable to enable the pattern type.

Results

You imported the WAS CE sample pattern type into the PureApplication System catalog and enabled it.

What to do next

Deploy a virtual application that is based on the WAS CE sample pattern type. Sample virtual application patterns are provided so that you can easily deploy a virtual application that is based on the sample pattern type.


Deploy a WAS CE sample virtual application pattern

Procedure

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Patterns > Virtual Applications.

  3. In the list of pattern types, select WASCE Sample Pattern. The list of virtual application patterns displays.

  4. Select one of the WAS CE sample patterns: WAS CE elastic scaling Web application, WAS CE Java EE Web application, WAS CE sample Web application, or WAS CE Web application with routing policy.

    Note: The description of the application provides the URL for the application after it is deployed, such as http://[IP address]:8080/sample.

  5. Click Deploy on the toolbar. The Deploy Virtual Application dialog box displays.

  6. Select the IP type and target for the deployment.

  7. Click OK. The virtual application pattern is deployed.

  8. Click Instances > Virtual Applications to view the deployed virtual application instance.

  9. Select the virtual application instance.

  10. In the details pane, find the virtual machine in the list of virtual machines. It has the name Web_Application-wasce.XXXX, where XXXX is a set of digits. The Public IP column lists the IP address for the virtual machine, which you use to form the URL for the application as provided in the application description.
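
    For example, if the Public IP column showed 192.0.2.15 (an illustrative address), the URL for the sample application would be http://192.0.2.15:8080/sample, following the pattern in the application description.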

What to do next

To explore the sample virtual application pattern in more detail, you can create a clone of it and then customize the clone before you deploy it.


Deploy a WAS CE sample virtual application pattern with routing policy

Procedure

Start the ELB Proxy service and obtain its IP address

  1. Click Cloud > Shared Services.

  2. Select ELB Proxy Service.

  3. If there is an instance in the cloud, skip to Step 8. Otherwise, click Deploy.

  4. Complete the following information in the Configure and deploy a shared service window:

    ELB instances range

    Specify the minimum and maximum number of ELB instances.

    Enable autowiring

    Select this check box if you want all new virtual application pattern deployments in the cloud group to automatically use elastic load balancing. If you clear this check box, you can manually enable elastic load balancing for specific deployments.

    Virtual Host

    If you selected the Enable autowiring check box, specify the default virtual host for virtual applications in the cloud group.

    Scaling Properties

    In the list, specify options for scaling ELB instances.

    • To automatically scale instances, select CPU based. To disable automatic scaling, select Disable automatic scaling.
    • If you selected an automatic scaling option, specify a threshold range and minimum time to trigger automatic scaling. The default values are 20% processor usage to scale in and 80% processor usage to scale out. The default trigger time is 900 seconds.

  5. Click OK.

  6. Complete the following steps in the Deploy Virtual Application window:

    1. Select the target cloud group.
    2. Select the IP version: IPv4 or IPv6.
    3. Select the cloud group from the menu.

  7. Click OK. Wait for the service to deploy.

  8. Click Instances > Shared Services.

  9. Click ELB Proxy Service - Shared.

  10. Obtain the Public IP for the shared service instance from the Public IP column in the Virtual Machine perspective section of the page.

    Note: If there are multiple ELB Proxy Service instances, you can use the IP address for any of the available instances.

Deploy the WAS CE Sample with Routing Policy

  1. Click Patterns > Virtual Applications.

  2. In the list of pattern types, select WASCE Sample Pattern.

  3. Select the WAS CE Web application with routing policy.

  4. Click Open.

  5. Click the Routing Policy component on the canvas.

  6. Obtain the virtual host for the application, such as myhost, from the Virtual host attribute.

    Note: When you apply a routing policy to the virtual application patterns, you use this attribute to set the virtual host for the application.

  7. In the hosts file for the workstation that you use to access the virtual application, map the IP address for the ELB Proxy Service that you obtained in Step 10 to the virtual host. For example:

    <IP that you obtained in Step 10> myhost
    

  8. Return to the system console.

  9. Click Deploy on the toolbar. The Deploy Virtual Application dialog box displays.

  10. Select the IP type and target for the deployment.

  11. Click OK. The virtual application pattern is deployed.

  12. Click Instances > Virtual Applications to view the deployed virtual application instance.

  13. Select the virtual application instance.

  14. In the details pane, find the virtual machine with the name Web_Application-wasce.XXXX, where XXXX is a set of digits. The Middleware Status column has a link that displays the endpoint of the application. Use this endpoint to form the URL for the application. In this sample application, the endpoint is sample, so the URL for the application is http://<virtual host that you mapped in Step 7>/sample.
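
    For example, with the virtual host myhost from Step 6, the URL for the sample application is http://myhost/sample.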

What to do next

To explore the sample virtual application pattern in more detail, you can create a clone of it and then customize the clone before you deploy it.


Sample: Developing a plug-in and pattern type with Eclipse

You can create a plug-in and pattern type with Eclipse. Download and install the Plug-in Development Kit (PDK) and set up the development environment.

  1. Go to the workspace that you created.

  2. Build a single plug-in.

    1. Right-click the build.plugin.xml file in the root of the project and click Run As > Ant Build. The plug-in starts to build.
    2. After the build completes, refresh the project; a new folder named export displays. All of the build artifacts are listed in the export folder, and the plug-in package is in the root of the export folder.

  3. Build all plug-ins in the workspace.

    1. Right-click the build.plugin.xml file in the root of the plugin.depends project and click Run As > Ant Build. The plug-in starts to build.
    2. After the build completes, refresh the project. A new folder named image displays; all of the built plug-in packages are in the image/plugins subfolder.

  4. Build a single pattern type. You must build all plug-ins in the workspace first.

    1. Right-click the build.patterntype.xml file in the root of the pattern type project. In the Hello sample, the pattern type project is patterntype.hello. Click Run As > Ant Build. The pattern type starts to build.
    2. After the build completes, refresh the project. All of the build artifacts are listed in the export folder. The pattern type package is in the root of the patterntypes folder.

Import a plug-in or pattern type into the catalog to test it.


Sample: Creating a virtual application with the patterntypetest.hello plug-in

In this Samples topic, you use the patterntypetest.hello plug-in to create an application that is deployed to the cloud.

Before you begin, set up your development environment and import the sample pattern type.

To create an application with the plug-ins, complete the following steps:

  1. Click the Workload Console tab at the top of the Welcome page to open the workload console.

  2. Click Patterns > Virtual Applications. The Virtual Application Patterns pane displays. Ensure that patterntypetest.hello 2.0 is selected.

  3. Select patterntypetest.hello 2.0 from the patterns list.

  4. Click New on the toolbar. The Create Application dialog box displays.

  5. Click Start Building. The Virtual Application Builder displays in a new window.

  6. In the Other Components section in the component list, drag the Hello and HelloCenter components into the middle section of the canvas. Name the application in the Virtual Application pane.

  7. Select the List View tab, next to the Diagram tab.

  8. Create the sample_userlist.json file with the following content:

    ["Mike","Alice","Joe"]

  9. Click the HelloCenter component. In the right attributes view, type sample in the Hello Center Name attribute field. Upload the sample_userlist.json file that you created to the Registered User List field.

  10. In the Diagram tab, click the Hello component. In the right attributes view, type one of the following names: "Mike", "Alice", or "Joe".

  11. Link Hello to HelloCenter. Click the link and type any name in the receiver of greeting message attribute field.

  12. To review all of the settings that you configured in the previous steps, click the List View tab on the left pane.

  13. Click Save and return to the Virtual Application Patterns page.

  14. Refresh the Virtual Application Patterns page to display the new application.

    The application displays in the list that is on the left.

  15. Select the application and click the Deploy icon in the upper right pane.

    The Deploy Virtual Application window displays. Specify the IP type and the target for the deployment, and then click OK.

    When the deployment is complete, the status icon turns green.

  16. Click Instances > Virtual Applications to view the deployed virtual application instance.

  17. Select the virtual application instance.

  18. In the details pane, find the virtual machine with the HCenter client in the list of virtual machines. It has the name Hello_Plugin-HVM-XXXX, where XXXX is a set of digits. Click the Log link next to the virtual machine to open Log Viewer in a new window.

  19. In Log Viewer, select IWD Agent and then choose ../logs/Hello_Plugin-HVM.XXXX.hello and then console.log. The log includes messages that are sent by users. For example:

    [2011-06-30 04:10:49,592] Hello/HCenter/changed.py 47121262922944 pid=16205 
    INFO Send the request to get a greeting message from Mike to Alice
    [2011-06-30 04:10:49,859] Hello/HCenter/changed.py 47121262922944 pid=16205 
    INFO Receive the message from hello center: Mike, a kind greeting message from Alice has been sent out 
    

  20. From Instances > Virtual Applications, select the application, and click the Manage button.

  21. Change to the new tab in your browser, which shows the Virtual Application Console.

  22. Choose the Operations tab.

  23. Select the Hello role. Use the operation to send a message to the HelloCenter and get a response back.

  24. Select the HelloCenter role. Use the operation to edit the list of registered users, which affects response messages.

You created a virtual application and deployed it to PureApplication System.

Monitor the virtual application instance.


Sample: Creating a plug-in project from the command line

You can create a plug-in project from the command line.

This topic is an example of how to create a plug-in project by using the command-line tools. Note that the preferred method to create plug-in projects is by using Eclipse. For more information, see Develop plug-ins in Eclipse.

  1. Open the command-line tool.
  2. cd to the plugin.depends project directory in your workspace.
  3. Set the ANT_HOME environment variable. You can use the Ant that is included in your Eclipse installation at eclipse/plugins/org.apache.ant_1.7*. Alternatively, you can invoke this Ant script from Eclipse: right-click create.plugin.project.xml in the plugin.depends project, select Run As > Ant Build, click the Main tab, and in the arguments section, type the -D property values that are provided in this sample, such as -Dproject.name=jp1.
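
    For example, a minimal ANT_HOME setup on a Linux or UNIX workstation might look like the following sketch; the Eclipse and Ant directory names are illustrative, so match them to your installation:

      # point ANT_HOME at the Ant that ships with Eclipse (directory name varies by version)
      export ANT_HOME=/opt/eclipse/plugins/org.apache.ant_1.7.1
      # put the ant launcher on the PATH
      export PATH=$ANT_HOME/bin:$PATH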

Continue with this task by completing the following steps:

  1. Create a template plug-in project as follows:

    ant -Dproject.name=tp1 -Dplugin.name=a.b.c.template -f create.plugin.project.xml
    

    The project.name property is optional; if you do not specify it, the project name defaults to the value of plugin.name (see the example after these steps).

  2. Create a Java plug-in project as follows:

    1. Create a Java plug-in project that contains no package name. The .java extension is assumed for the Java class name:

      ant -Dproject.name=jp1 -Dplugin.name=a.b.c.java -Djava.classname=MyPlugin -f create.plugin.project.xml 
      

    2. Create a Java plug-in project that contains a package name:

      ant -Dproject.name=jp2 -Dplugin.name=a.b.c.java -Djava.classname=a.b.c.MyPlugin -f create.plugin.project.xml 
      

  3. Verify that the command is successful. Import the newly created projects into your workspace.

    To build the plug-in projects, for example jp1, you can find build.plugin.xml in project jp1. Right-click build.plugin.xml and select Run As > Ant Build with the clean and publish targets selected. The equivalent Ant command, issued in the project jp1 directory, is:

    ant -f build.plugin.xml clean publish 
    
    The plug-in a.b.c.java-<version>.tgz is created in the export directory.
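
As noted in Step 1, omitting project.name creates the project under the plug-in name. For example, the following invocation would create a project named a.b.c.template:

    ant -Dplugin.name=a.b.c.template -f create.plugin.project.xml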


Sample: Building a single plug-in and pattern type with the command-line tools

You can build a single plug-in and a pattern type with the command-line tools. This task assumes that you have set up the plug-in samples environment, including a working Ant installation. A consolidated command-line example follows the steps.

  1. Go to the workspace that you created.

  2. Go to the root folder of the target plug-in project.

  3. Type this command to build a single plug-in:

    <ant command path> -f build.plugin.xml  
    
    The build output displays in the console.

  4. Go to the export folder of the plug-in project. This folder is generated in Step 3. Locate the plug-in package, which is a .tgz file.

  5. Go to the root of the plugin.depends project.

  6. Type the following command to build all plug-ins in this workspace:

    <ant command path> -f build.xml
    
    This command builds the plug-ins in this workspace one at a time. After the script completes, go to the image/plugins folder of the plugin.depends project to check all of the built plug-in packages.

  7. Go to the root of the pattern type project, patterntypetest.hello, and type the following command:

    <ant command path> -f build.patterntype.xml
    
    After the script completes, go to the root of the export folder of the patterntypetest.hello project to check the built pattern type package, which is a .tgz file.
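
Putting these steps together, a consolidated command-line build of the hello sample might look like the following sketch; the Ant installation path /opt/apache-ant is illustrative, so substitute the path to your own Ant launcher:

    # build every plug-in in the workspace (run from plugin.depends)
    cd plugin.depends
    /opt/apache-ant/bin/ant -f build.xml

    # build the pattern type package (run from patterntypetest.hello)
    cd ../patterntypetest.hello
    /opt/apache-ant/bin/ant -f build.patterntype.xml

    # the built .tgz pattern type package is now in the export folder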