


Plug in a pluggable evictor

Since evictors are associated with BackingMaps, use the BackingMap interface to specify the pluggable evictor.


Optional pluggable evictors

The default TTL evictor uses an eviction policy that is based on time; the number of entries in the BackingMap has no effect on the expiration time of an entry. Use an optional pluggable evictor to evict entries based on the number of entries that exist rather than on time.

The optional pluggable evictors, LRUEvictor (least recently used) and LFUEvictor (least frequently used), provide commonly used algorithms for deciding which entries to evict when a BackingMap grows beyond a size limit.

The BackingMap informs an evictor as entries are created, modified, or removed in a transaction. The evictor keeps track of these entries and chooses when to evict one or more of them from the BackingMap instance.

A BackingMap instance has no configuration information for a maximum size. Instead, evictor properties are set to control the evictor behavior. Both the LRUEvictor and the LFUEvictor have a maximum size property that causes the evictor to begin evicting entries after the maximum size is exceeded. Like the TTL evictor, the LRU and LFU evictors might not evict an entry immediately when the maximum number of entries is reached, to minimize the impact on performance.

If the LRU or LFU eviction algorithm is not adequate for a particular application, you can write your own evictor to implement a custom eviction strategy.


Use optional pluggable evictors

To add optional pluggable evictors to the BackingMap configuration, use either programmatic configuration or XML configuration, as described in the following sections.


Programmatically plug in a pluggable evictor

Because evictors are associated with BackingMaps, use the BackingMap interface to specify the pluggable evictor. The following code snippet is an example of specifying an LRUEvictor for the map1 BackingMap and an LFUEvictor for the map2 BackingMap instance:

plugging in an evictor programmatically
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.BackingMap;
import com.ibm.websphere.objectgrid.plugins.builtins.LRUEvictor;
import com.ibm.websphere.objectgrid.plugins.builtins.LFUEvictor;

ObjectGridManager ogManager = ObjectGridManagerFactory.getObjectGridManager();
ObjectGrid og = ogManager.createObjectGrid("grid");

// Plug an LRU evictor into the map1 BackingMap.
BackingMap bm = og.defineMap("map1");
LRUEvictor evictor = new LRUEvictor();
evictor.setMaxSize(1000);             // maximum entries per LRU queue
evictor.setSleepTime(15);             // seconds between eviction checks
evictor.setNumberOfLRUQueues(53);     // approximate total: 53 * 1000 = 53,000 entries
bm.setEvictor(evictor);

// Plug an LFU evictor into the map2 BackingMap.
bm = og.defineMap("map2");
LFUEvictor evictor2 = new LFUEvictor();
evictor2.setMaxSize(2000);            // maximum entries per LFU heap
evictor2.setSleepTime(15);            // seconds between eviction checks
evictor2.setNumberOfHeaps(211);       // approximate total: 211 * 2000 = 422,000 entries
bm.setEvictor(evictor2);

The preceding snippet shows an LRUEvictor being used for the map1 BackingMap with an approximate maximum of 53,000 entries (53 queues * 1000 entries). The LFUEvictor is used for the map2 BackingMap with an approximate maximum of 422,000 entries (211 heaps * 2000 entries). Both the LRU and LFU evictors have a sleep time property that indicates how long the evictor sleeps before waking up and checking whether any entries need to be evicted. The sleep time is specified in seconds. A value of 15 seconds is a good compromise between performance impact and preventing the BackingMap from growing too large. The goal is to use the largest sleep time possible without causing the BackingMap to grow to an excessive size.

The setNumberOfLRUQueues method sets the LRUEvictor property that indicates how many LRU queues the evictor uses to manage LRU information. A collection of queues is used so that every entry does not keep LRU information in the same queue. This approach can improve performance by minimizing the number of map entries that need to synchronize on the same queue object. Increasing the number of queues is a good way to minimize the impact that the LRU evictor can cause on performance. A good starting point is to use ten percent of the maximum number of entries as the number of queues. Using a prime number is typically better than using a number that is not prime. The setMaxSize method indicates how many entries are allowed in each queue. When a queue reaches its maximum number of entries, the least recently used entry or entries in that queue are evicted the next time that the evictor checks to see if any entries need to be evicted.

The setNumberOfHeaps method sets the LFUEvictor property to set how many binary heap objects the LFUEvictor uses to manage LFU information. Again, a collection is used to improve performance. Using ten percent of the maximum number of entries is a good starting point and a prime number is typically better than using a number that is not prime. The setMaxSize method indicates how many entries are allowed in each heap. When a heap reaches its maximum number of entries, the least frequently used entry or entries in that heap are evicted the next time that the evictor checks to see if any entries need to be evicted.
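
As a rough illustration of the relationship between these properties, the following sketch derives the per-bucket maxSize from a target total capacity and a prime bucket count. The variable names and target values are illustrative assumptions; only the LRUEvictor and LFUEvictor setters shown earlier are product API.

import com.ibm.websphere.objectgrid.plugins.builtins.LRUEvictor;
import com.ibm.websphere.objectgrid.plugins.builtins.LFUEvictor;

// Illustrative sizing: choose a prime number of buckets (queues or heaps),
// then size each bucket so that buckets * maxSize is approximately the
// desired total number of entries.
int targetLruEntries = 53000;
int lruQueues = 53;                            // prime number of LRU queues
LRUEvictor lru = new LRUEvictor();
lru.setNumberOfLRUQueues(lruQueues);
lru.setMaxSize(targetLruEntries / lruQueues);  // 1000 entries per queue
lru.setSleepTime(15);                          // seconds between eviction checks

int targetLfuEntries = 422000;
int lfuHeaps = 211;                            // prime number of LFU heaps
LFUEvictor lfu = new LFUEvictor();
lfu.setNumberOfHeaps(lfuHeaps);
lfu.setMaxSize(targetLfuEntries / lfuHeaps);   // 2000 entries per heap
lfu.setSleepTime(15);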


XML configuration approach to plug in a pluggable evictor

Instead of using various APIs to programmatically plug in an evictor and set its properties, you can use an XML file to configure each BackingMap, as the following sample illustrates:

plugging in an evictor using XML
<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd"
    xmlns="http://ibm.com/ws/objectgrid/config">
    <objectGrids>
        <objectGrid name="grid">
            <backingMap name="map1" ttlEvictorType="NONE" pluginCollectionRef="LRU" />
            <backingMap name="map2" ttlEvictorType="NONE" pluginCollectionRef="LFU" />
        </objectGrid>
    </objectGrids>
    <backingMapPluginCollections>
        <backingMapPluginCollection id="LRU">
            <bean id="Evictor" className="com.ibm.websphere.objectgrid.plugins.builtins.LRUEvictor">
                <property name="maxSize" type="int" value="1000" description="set max size for each LRU queue" />
                <property name="sleepTime" type="int" value="15" description="evictor thread sleep time" />
                <property name="numberOfLRUQueues" type="int" value="53" description="set number of LRU queues" />
            </bean>
        </backingMapPluginCollection>
        <backingMapPluginCollection id="LFU">
            <bean id="Evictor" className="com.ibm.websphere.objectgrid.plugins.builtins.LFUEvictor">
                <property name="maxSize" type="int" value="2000" description="set max size for each LFU heap" />
                <property name="sleepTime" type="int" value="15" description="evictor thread sleep time" />
                <property name="numberOfHeaps" type="int" value="211" description="set number of LFU heaps" />
            </bean>
        </backingMapPluginCollection>
    </backingMapPluginCollections>
</objectGridConfig>
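
To activate this configuration, pass the location of the XML file to the ObjectGridManager when the grid is created. The following sketch assumes the file is stored as objectgrid.xml (the name and location are illustrative) and that the createObjectGrid(String, URL) overload is available in your release:

import java.net.URL;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;

// The file name and location are illustrative; point the URL at your XML file.
// (Handle MalformedURLException and ObjectGridException as appropriate in real code.)
URL configUrl = new URL("file:objectgrid.xml");

ObjectGridManager ogManager = ObjectGridManagerFactory.getObjectGridManager();

// Builds the "grid" ObjectGrid with the evictors wired in from the XML,
// so no setEvictor calls are needed in application code.
ObjectGrid og = ogManager.createObjectGrid("grid", configUrl);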


Memory-based eviction

All built-in evictors support memory-based eviction, which you can enable on the BackingMap interface by setting the evictionTriggers attribute of the BackingMap to MEMORY_USAGE_THRESHOLD. For more information about how to set the evictionTriggers attribute on the BackingMap, see the BackingMap interface and the eXtreme Scale configuration reference.
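
The following is a minimal sketch of enabling the trigger programmatically, assuming a setEvictionTriggers(String) setter on BackingMap; check the BackingMap interface reference for the exact method in your release:

import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.BackingMap;
import com.ibm.websphere.objectgrid.plugins.builtins.LRUEvictor;

ObjectGrid og = ObjectGridManagerFactory.getObjectGridManager().createObjectGrid("grid");
BackingMap bm = og.defineMap("map1");

// Assumed setter: enable the memory usage threshold trigger for this BackingMap.
bm.setEvictionTriggers("MEMORY_USAGE_THRESHOLD");

// Memory-based eviction is carried out by the built-in evictors, so one is plugged in as well.
bm.setEvictor(new LRUEvictor());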

Memory-based eviction is based on a heap usage threshold. When memory-based eviction is enabled on a BackingMap and the BackingMap has any built-in evictor, the usage threshold is set to a default percentage of total memory if the threshold has not been previously set.

To change the default usage threshold percentage, set the memoryThresholdPercentage property in the container and server properties files for the eXtreme Scale server process.

To set the target usage threshold on an eXtreme Scale client process, you can use the MemoryPoolMXBean. See also: containerServer.props file and Starting eXtreme Scale server processes.
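
Because MemoryPoolMXBean is part of the standard java.lang.management API, a client-side usage threshold can be set along the following lines. This is a generic sketch: the choice of heap pools and the 70 percent figure are illustrative assumptions, not values prescribed by the product.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

// Set a usage threshold on each heap memory pool that supports thresholds.
// The 70 percent figure is illustrative; choose a value that suits your heap size.
for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
    if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
        long max = pool.getUsage().getMax();
        if (max > 0) {
            pool.setUsageThreshold((long) (max * 0.70));
        }
    }
}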

At run time, if memory usage exceeds the target usage threshold, the memory-based evictors start evicting entries and try to keep memory usage below the target usage threshold. However, there is no guarantee that eviction is fast enough to avoid a potential out-of-memory error if the runtime continues to consume memory rapidly.


Parent topic:

Plug-ins for evicting cache objects


Related concepts

TimeToLive (TTL) evictor

Write a custom evictor

Plug-in evictor performance best practices

