

Zone-preferred routing

Overview

Zone-preferred routing defines how WebSphere eXtreme Scale (WXS) directs transactions to zones, giving WXS clients the ability to specify a preference for a particular zone or set of zones.


Requirements for zone-preferred routing

Per-container partition placement is necessary to use zone-preferred routing.

This placement strategy is a good fit for applications that store session data in the ObjectGrid. The default partition placement strategy for WXS is fixed-partition. With fixed-partition placement, keys are hashed at transaction commit time to determine which partition houses the key-value pair of the map.
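Conceptually, the fixed-partition mapping is a hash-modulo calculation. The following Java sketch models that behavior for illustration only; it is not the actual WXS routing implementation:

// Conceptual model of fixed-partition routing: the key hash,
// modulo the partition count, selects the owning partition.
int partitionForKey(Object key, int numberOfPartitions) {
    return Math.abs(key.hashCode() % numberOfPartitions);
}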

Per-container placement assigns the data to a random partition at transaction commit time through the SessionHandle object. You must be able to reuse or reconstruct the SessionHandle object to retrieve your data from the ObjectGrid.

Use zones to have more control over where primary shards and replica shards are placed in your domain. A multi-zone deployment is advantageous when the data is in multiple physical locations. Geographically separating primaries and replicas is a way to ensure that catastrophic loss of one datacenter does not affect the availability of the data.

When data is spread across multiple zones, it is likely that clients are also spread across the topology. Routing clients to their local zone or data center has the obvious performance benefit of reduced network latency. Route clients to local zones or data centers when possible.


Configure the topology for zone-preferred routing

Consider the following scenario. You have two data centers: Chicago and London. To minimize client response time, you want clients to read and write data in their local data center.

Primary shards must be placed in each data center so that transactions can be written locally from each location. Clients must be aware of zones to route to the local zone. Per-container placement locates new primary shards on each container that is started. Replicas are placed according to zone and placement rules that are specified by the deployment policy. By default, a replica is placed in a different zone than its primary shard.

Consider the following deployment policy for this scenario.

<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

    <objectgridDeployment objectgridName="universe">

        <mapSet name="mapSet1" 
                placementStrategy="PER_CONTAINER"
                numberOfPartitions="3" 
                maxAsyncReplicas="1">

            <map ref="planet" />
        </mapSet>

    </objectgridDeployment>

</deploymentPolicy>

Each container that starts with the deployment policy receives three new primaries. Each primary has one asynchronous replica. Start each container with the appropriate zone name.

startOgServer.sh s1 \
                 -objectGridFile ../xml/universeGrid.xml \
                 -deploymentPolicyFile ../xml/universeDp.xml \
                 -catalogServiceEndpoints MyServer1.company.com:2809 \
                 -zone Chicago

startOgServer.sh s2 \
                 -objectGridFile ../xml/universeGrid.xml \
                 -deploymentPolicyFile ../xml/universeDp.xml \
                 -catalogServiceEndpoints MyServer1.company.com:2809 \
                 -zone London

If the containers are running in WebSphere Application Server, create a node group for each zone and name it with the prefix ReplicationZone followed by the zone name. Servers that are running on the nodes in these node groups are placed in the appropriate zone. For example, servers running on a Chicago node might be in a node group named ReplicationZoneChicago.

Primary shards for the Chicago zone have replicas in the London zone. Primary shards for the London zone have replicas in the Chicago zone.

Figure: Primaries and replicas in zones

Set the preferred zones for the clients. Provide a client properties file to the client JVM. Create a file named objectGridClient.properties and ensure that this file is in the class path.

Include the preferZones property in the file. Set the property value to the appropriate zone. Clients in Chicago must have the following value in the objectGridClient.properties file:

preferZones=Chicago

The property file for London clients must contain the following value:

preferZones=London

This property instructs each client to route transactions to its local zone if possible. Data that is inserted into a primary shard in the local zone is asynchronously replicated to the foreign zone.
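For reference, a client obtains the grid by connecting to the catalog service. A minimal sketch follows; the endpoint and grid name match the earlier examples, and the objectGridClient.properties file that contains preferZones is read from the class path:

import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;

// Connect to the catalog service; the objectGridClient.properties
// file on the class path supplies the preferZones setting.
ObjectGridManager ogManager = ObjectGridManagerFactory.getObjectGridManager();
ClientClusterContext ccc = ogManager.connect("MyServer1.company.com:2809", null, null);
ObjectGrid objectGrid = ogManager.getObjectGrid(ccc, "universe");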


Use the SessionHandle to route to the local zone

The per-container placement strategy does not use a hash-based algorithm to determine the location of the key-value pairs in the ObjectGrid. Your ObjectGrid must use SessionHandle objects to ensure that transactions are routed to the correct location when you are using this placement strategy. When a transaction is committed, a SessionHandle is bound to the Session if one has not already been set. The SessionHandle can also be bound to the Session by calling the Session.getSessionHandle method before committing the transaction. The following code snippet shows a SessionHandle being bound before committing the transaction.

// Session, ObjectMap, and SessionHandle are in the
// com.ibm.websphere.objectgrid package.
Session ogSession = objectGrid.getSession();

// binding the SessionHandle
SessionHandle sessionHandle = ogSession.getSessionHandle();

ogSession.begin();
ObjectMap map = ogSession.getMap("planet");
map.insert("planet1", "mercury");

// tran is routed to partition specified by SessionHandle
ogSession.commit();

Assume that the prior code was running on a client in the Chicago data center. The preferZones attribute is set to Chicago for this client. As a result, the deployment routes transactions to one of the primary partitions in the Chicago zone: partition 0, 1, 2, 6, 7, or 8. (In this example, two containers were started in each zone, so the Chicago containers host the primaries for partitions 0 through 2 and 6 through 8.)

The SessionHandle object provides a path back to the partition that is storing this committed data. The SessionHandle object must be reused or reconstructed and set on the Session to get back to the partition containing the committed data.

// reuse the SessionHandle from the insert transaction
ogSession.setSessionHandle(sessionHandle);
ogSession.begin();

// value returned will be "mercury"
String value = map.get("planet1");
ogSession.commit();

The transaction in this code reuses the SessionHandle that was created during the insert transaction. The get transaction then routes to the partition that holds the inserted data. Without the SessionHandle, the transaction might be routed to a different partition and fail to find the inserted data.
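For session-style workloads, the handle is typically kept alongside the application session identifier so that later requests can route back to the same partition. The following sketch uses a hypothetical in-memory registry; a real application would persist the handle with the session identifier, for example in a cookie:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import com.ibm.websphere.objectgrid.SessionHandle;

// Hypothetical registry that maps an application session ID to the
// SessionHandle captured when that session's data was inserted.
Map<String, SessionHandle> handleRegistry = new ConcurrentHashMap<>();

// After the insert transaction commits, remember the handle.
handleRegistry.put("session42", ogSession.getSessionHandle());

// On a later request, restore the handle before beginning the
// transaction so the get routes back to the same partition.
ogSession.setSessionHandle(handleRegistry.get("session42"));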


How container and zone failures affect zone-based routing

Generally, a client with the preferZones property set routes all transactions to the specified zone or zones. However, the loss of a container results in the promotion of a replica shard to a primary shard. A client that was previously routing to partitions in the local zone must retrieve previously inserted data from the remote zone.

Consider the following scenario. A container in the Chicago zone is lost. It previously contained primaries for partitions 0, 1, and 2. The new primary shards for these partitions are then placed in the London zone since London hosted the replicas for these partitions.

Any Chicago client that is using a SessionHandle object that points to one of the failed-over partitions now reroutes to London. Chicago clients that are using new SessionHandle objects route to Chicago-based primaries.

Similarly, if the entire Chicago zone is lost, all replicas in the London zone are promoted to primaries. In this case, all Chicago clients route their transactions to London.


Parent topic:

Controlling shard placement with zones

