Partitioning


Overview

Define the number of partitions in the deployment policy.

Each partition hosts the complete data for individual entries.

Applications that require transactional guarantees across large sets of data do not scale well and cannot be partitioned effectively. WebSphere eXtreme Scale does not currently support two-phase commit across partitions.

The number of container servers that can be used to scale out a single application is limited by the total number of shards, which is calculated with the following formula:

(Number_Partitions*(1 + Number_Replicas))

Each partition is made up of a primary shard and the configured number of replica shards.
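
As an illustration only, the following Java sketch applies this formula to compute the maximum number of container server JVMs that a configuration can use; the class and method names are illustrative and are not part of the product API.

public class ShardCount {

    // Maximum number of container servers that can host shards:
    // Number_Partitions * (1 + Number_Replicas), because each partition has
    // one primary shard plus the configured number of replica shards.
    static int maxContainerServers(int numberPartitions, int numberReplicas) {
        return numberPartitions * (1 + numberReplicas);
    }

    public static void main(String[] args) {
        // 16 partitions with one replica each gives 32 shards, so the data
        // grid can spread across at most 32 container server JVMs.
        System.out.println(maxContainerServers(16, 1));
    }
}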


Use partitions

A data grid can scale out to at most the number of partitions multiplied by the number of shards per partition.

For example, if you have 16 partitions and each partition has one primary and one replica, or two shards, then you can potentially scale to 32 Java virtual machines (JVMs). In this case, one shard is defined for each JVM.

Choose a reasonable number of partitions based on the number of JVMs that you are likely to use. Each shard increases processor and memory usage for the system. The system is designed to scale out and handle this overhead in proportion to the number of available server JVMs.

An application should not use thousands of partitions if it runs on a data grid of four container server JVMs. Configure the application to have a reasonable number of shards for each container server JVM. For example, an unreasonable configuration is 2000 partitions with two shards each, running on four container JVMs. This configuration results in 4000 shards that are placed on four container JVMs, or 1000 shards per container JVM.

A better configuration is under 10 shards for each expected container JVM. This configuration still allows for elastic scaling of ten times the initial configuration while keeping a reasonable number of shards per container JVM.

Consider this scaling example: you currently have six physical servers with two container JVMs per physical server. You expect to grow to 20 physical servers over the next three years. With 20 physical servers, you have 40 container server JVMs; to be pessimistic, plan for 60. You want four shards per container JVM, so with 60 potential container JVMs you need a total of 240 shards. If each partition has one primary and one replica, you want 120 partitions. This example gives you 240 shards divided by 12 initial container JVMs, or 20 shards per container JVM, for the initial deployment with the potential to scale out to 20 computers later.
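
The arithmetic in this planning example can be summarized in a short Java sketch; the numbers are taken from the example above and are illustrative only.

public class PartitionPlanning {
    public static void main(String[] args) {
        int plannedContainerJvms = 60;   // pessimistic estimate of future container JVMs
        int shardsPerJvm = 4;            // desired shards per container JVM at full scale
        int shardsPerPartition = 2;      // one primary plus one replica

        int totalShards = plannedContainerJvms * shardsPerJvm;   // 240 shards
        int partitions = totalShards / shardsPerPartition;       // 120 partitions

        int initialContainerJvms = 12;   // 6 physical servers x 2 container JVMs each
        int initialShardsPerJvm = totalShards / initialContainerJvms;  // 20 shards per JVM

        System.out.println("Partitions to define: " + partitions);
        System.out.println("Initial shards per container JVM: " + initialShardsPerJvm);
    }
}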


ObjectMap and partitioning

With the default FIXED_PARTITION placement strategy, maps are split across partitions and keys hash to different partitions. The client does not need to know to which partition the keys belong. If a mapSet has multiple maps, the maps should be committed in separate transactions.
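
A minimal client sketch follows, assuming a catalog service endpoint of cataloghost:2809, a grid named Grid, and maps named Map1 and Map2 in the same map set; it inserts keys without tracking which partitions they hash to and commits each map in its own transaction.

import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class MapClient {
    public static void main(String[] args) throws Exception {
        ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();
        // Connect to the catalog service; the endpoint and grid name are placeholders.
        ClientClusterContext ccc = manager.connect("cataloghost:2809", null, null);
        ObjectGrid grid = manager.getObjectGrid(ccc, "Grid");
        Session session = grid.getSession();

        // Keys hash to partitions automatically; the client does not track which one.
        ObjectMap map1 = session.getMap("Map1");
        session.begin();
        map1.insert("key1", "value1");
        session.commit();

        // A second map in the same map set is committed in its own transaction.
        ObjectMap map2 = session.getMap("Map2");
        session.begin();
        map2.insert("key2", "value2");
        session.commit();
    }
}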


Entities and partitioning

The entity manager includes an optimization that helps clients that work with entities on a server. The entity schema on the server for the map set can specify a single root entity. The client must access all entities through the root entity. The entity manager can then find related entities from that root in the same partition without requiring the related maps to have a common key. The root entity establishes affinity with a single partition. This partition is used for all entity fetches within the transaction after affinity is established. This affinity can save memory because the related maps do not require a common key. The root entity must be specified with a modified entity annotation, as shown in the following example:

@Entity(schemaRoot=true)

Use this annotation to identify the root of the object graph. The object graph defines the relationships between one or more entities. Each linked entity must resolve to the same partition. All child entities are assumed to be in the same partition as the root. The child entities in the object graph are accessible from a client only through the root entity. Root entities are always required in partitioned environments when an eXtreme Scale client is used to communicate with the server. Only one root entity type can be defined per client. Root entities are not required when you use eXtreme Transaction Processing (XTP) style ObjectGrids, because all communication with the partition is accomplished through direct, local access and not through the client and server mechanism.
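
For illustration, the following sketch defines a hypothetical Customer root entity with Order child entities; the entity names and fields are assumptions, and the annotation types are assumed to come from the eXtreme Scale projector annotation package.

import java.util.Collection;

import com.ibm.websphere.projector.annotations.Entity;
import com.ibm.websphere.projector.annotations.Id;
import com.ibm.websphere.projector.annotations.OneToMany;

// Root entity: clients access the object graph through this type, and the
// root key establishes partition affinity for the transaction.
@Entity(schemaRoot = true)
public class Customer {
    @Id String customerId;
    String name;

    // Child entities resolve to the same partition as the root entity,
    // so the related maps do not need a common key.
    @OneToMany Collection<Order> orders;
}

// Child entity: reachable from a client only through the Customer root.
@Entity
class Order {
    @Id String orderId;
    double total;
}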


Parent topic:

Scalability overview


Related concepts

Data grids, partitions, and shards
Placement and partitions
Single-partition and cross-data-grid transactions
Scaling in units or pods
Sizing memory and partition count calculation


Related tasks

Configure deployment policies

Related reference

Deployment policy descriptor XML file

