Directory Server, Version 6.1

 

Distributed directories

A distributed directory is a directory environment in which data is partitioned across multiple directory servers. A distributed directory consists of a collection of machines, including relational database management (RDBM) back-end servers that hold the data and proxy servers that manage the topology.

 

The proxy server

The proxy server is a special type of IBM® Tivoli® Directory Server that provides request routing, load balancing, failover, distributed authentication, support for distributed and membership groups, and partitioning of containers. Most of these functions are provided in a new backend, the proxy backend. The proxy server does not have an RDBM backend and cannot take part in replication.

A directory proxy server sits at the front end of a distributed directory and provides efficient routing of user requests, improving performance in certain situations and presenting a unified directory view to the client. It can also be used at the front end of a server cluster to provide failover and load balancing.

The proxy server routes read and write requests differently, based on the configuration. Write requests for a single partition are directed to the single primary write server; peer servers are not used, which avoids update conflicts. Read requests are routed in a round-robin manner to balance the load. However, if high consistency is enabled, read requests are also routed to the primary write server.

The proxy server also provides data support for groups and ACLs, which are not affected by partitioning, and support for partitioning of flat namespaces. Originally, the proxy server was intended to solve the directory partitioning problem, especially across a flat namespace. However, all of its features are designed so that it can also be used as an LDAP-aware load balancer.

The proxy server is configured with connection information to connect to each of the backend servers for which it is proxying. The connection information comprises the host address, port number, bind DN, credentials, and a connection pool size. Each of the back-end servers is configured with the DN and credentials that the proxy server uses to connect to it. The DN must be a member of the global administration group, a member of the local administration group with dirData authority, or the primary administrator.

  • If you use the ddsetup tool, the configuration information must be in sync with the ddsetup information. See the IBM Tivoli Directory Server Command Reference for more information about ddsetup.

  • The proxy server does not support null-based searches and returns an operations error if a null-based search is issued against it.

Finally, the proxy server is configured with its own schema. You need to ensure that the proxy server is configured with the same schema as the back-end servers for which it is proxying. The proxy server must also be configured with partition information.

The server uses the same default configuration file whether it is configured as a directory server or a proxy server. However, when the server is configured as a proxy server, the configuration settings for features that the proxy server does not support are ignored. The following entries in the configuration file are ignored by the proxy server:

  • cn=Event Notification, cn=Configuration

  • cn=Persistent Search, cn=Configuration

  • cn=RDBM Backends, cn=IBM Directory, cn=Schemas, cn=Configuration

  • cn=Replication, cn=configuration

  • cn=Bulkload, cn=Log Management, cn=Configuration

  • cn=DB2CLI, cn=Log Management, cn=Configuration

Environment variables set under the entry "cn=Front End, cn=Configuration" are honored by the proxy server. The environment variables supported by the proxy include the following (a configuration sketch follows the table):

Table 23. Environment variables supported by proxy

  PROXY_CACHE_GAG_PW
    Specifies whether password caching is enabled or disabled. The proxy server can locally cache the passwords of global administration group members. If password policy is enabled, caching of global administration group member passwords is disabled; if password policy is disabled, caching is enabled. The PROXY_CACHE_GAG_PW environment variable can override this default behavior: a value of YES enables password caching, and any other value disables it. When the variable is unset, the default behavior is governed by the password policy setting.

  PROXY_GLOBAL_GROUP_PERIOD
    Interval after which the proxy interval thread wakes up. The default value for this variable is 30 seconds.

  PROXY_USE_SINGLE_SENDER
    Specifies whether a single sender thread is used for the operations. By default this is false.

  PROXY_RECONNECT_TIME
    Specifies the interval after which the proxy tries to reconnect to a backend server that has gone down. By default this is 5 seconds.

  PROXY_HEALTHCHECK_OLIMIT
    Specifies the limit on the number of outstanding results. By default this is set to 1.

  LDAP_LIB_WRITE_TIMEOUT
    Specifies the time (in seconds) to wait for a socket to be write ready.
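For example, one of these variables can be set with idsldapmodify against the cn=Front End entry. This is only a sketch: it assumes that proxy environment variables are stored as values of the ibm-slapdSetEnv attribute of "cn=Front End, cn=Configuration", as with other server environment settings; adjust the variable name and value for your environment.

idsldapmodify -D <adminDN> -w <adminPW> -i <filename>

where <filename> contains:

dn: cn=Front End, cn=Configuration
changetype: modify
add: ibm-slapdSetEnv
ibm-slapdSetEnv: PROXY_RECONNECT_TIME=10

If the variable is already set, replace the existing value instead of adding a new one. The proxy server typically must be restarted for the new value to take effect.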

The proxy server supports some Tivoli Directory Server features but not others. The following features are supported by the proxy server:

  • Log access extended operations.

  • Dynamic configuration of the supported attributes

  • Server start stop

  • TLS

  • Unbind of a bound DN

  • Dynamic trace

  • Attribute type extended operation

  • User type extended operation

  • Auditing of source IP control

  • Server administration control

  • Entry check sum

  • Entry UUID

  • Filter ACLs

  • Admin group delegation

  • Denial of service prevention

  • Admin daemon auditing

  • Dynamic groups

  • Monitor operation counts

  • Monitor logging counts

  • Connection monitor active workers

  • Monitor tracing

  • SSL FIPS mode

  • Modify DN, as long as the entry rename does not move the entry across partitions.

  • Multiple instances

  • AES password encryption

  • Admin password policy

  • Locate entry extended operation

  • Resume role extended operation

  • ldap get file

  • Limit number of attribute values

  • Audit performance - Performance auditing is supported for the proxy. The following performance information fields in each audit record are valid for the proxy; the RDBM lock wait time is always 0 for a proxy server:

    • Operation response time

    • Time spent on work Q

    • Client I/O time

  • DIGEST-MD5 binds

  • Admin roles

  • Preoperation plugins

  • Global Admin Group

The following features are not supported by the proxy server:

  • Event notification

  • Replication management extended operations

  • Group evaluation extended operation

  • Account Status extended operation

  • Paged and Sorted Searches

  • Subtree delete

  • Proxy authorization control

  • Group authorization control

  • Omit group Referential integrity

  • Unique Attributes

  • ibm-allmembers search

  • Transactions

    Transactions are supported only if all entries involved reside on a single back-end directory server.

  • Effective password policy

  • Online backup extended operation

  • Password prebind extended operation

  • Password post bind extended operation

  • Post Operation plugins

 

Splitting data within a subtree based on a hash of the RDN using a proxy server

In this setup, three servers have their data split within a "container" (under some entry in the directory tree). Because the proxy server handles the routing of requests to the appropriate servers, no referrals are used. Client applications need only be aware of the proxy server. The client applications never have to authenticate with servers A, B, or C.

The illustration shows a single proxy server distributing the data for the subtree o=sample across three servers. ServerA has a partition value or hash value of 1, ServerB has a hash value of 2 and ServerC has a hash value of 3.

Data is split evenly across the directories by hashing on the RDN just below the base of the split. In this example the data within the subtree is split based on the hash value of the RDN. Hashing is only supported on the RDN at one level in the tree under a container. Nested partitions are allowed. In the case of a compound RDN the entire normalized compound RDN is hashed. The hash algorithm assigns an index value to the DN of each entry. This value is then used to distribute the entries across the available servers evenly.

Notes:

  1. The parent entries across multiple servers must remain synchronized. It is the administrator's responsibility to maintain the parent entries.

  2. ACLs must be defined at the partition base level on each server.

The number of partitions and the partition level are determined when the proxy server is configured, and when the data is split. There is no way to expand or reduce the topology without repartitioning.

The hash value enables the proxy server to locate and retrieve entries.

For example: Data under o=sample is split across three servers. This means that the proxy server is configured to hash RDN values immediately below o=sample among 3 servers, or "buckets". It also means that entries more than one level below o=sample map to the same server as their ancestor entry immediately below o=sample. For example, cn=test,o=sample and cn=user1,cn=test,o=sample will always map to the same server. Server A holds all the entries with a hash value of 1, server B holds all the entries with a hash value of 2, and server C holds all the entries with a hash value of 3. Suppose the proxy server receives an add request for an entry with DN cn=Test,o=sample. The proxy server then uses the configuration information (specifically, that there are 3 partitions with a base at o=sample) and the cn=Test RDN as inputs to the internal hashing function. If the function returns 1, the entry resides on Server A and the add request is forwarded there.

Entry hashing is based on the RDN of the entry. Only the portion of the DN immediately to the left of the split point is used by the hash algorithm. Also, the whole normalized string is used for the hash, not just the value. For example, if our split point is o=sample and this is split into three partitions, then the following occurs:

  • cn=example,o=sample hashes to a single server, let's say serverA. This is determined by hashing cn=example into one of three partitions.

  • dc=example, o=sample hashes to a different server, let's say serverB. This is determined by hashing dc=example.

  • cn=foo,cn=example,o=sample hashes to serverA. This is because only cn=example is used for the hash algorithm. All entries beneath cn=example,o=sample resolve to the same server as cn=example,o=sample.

Note that when a 6.1 proxy server is used with 6.0 backend servers, the cn=pwdpolicy subtree must be configured as a split point. However, a 6.1 proxy server that uses 6.1 backend servers should not have a cn=pwdpolicy split point.

 

DN Partition plug-in

The TDS server provides the option to load a customer-written partitioning function at server runtime. The existing hash algorithm that is used to partition data is statically linked into the TDS server. However, with the DN partitioning function implemented as a plug-in, the hash algorithm can be easily replaced, making the TDS server more flexible and adaptive.

The existing hash algorithm, however, remains the default partitioning plug-in and is loaded during server startup if no customized code is available. This feature introduces an attribute called ibm-slapdDNPartitionPlugin in the objectclass ibm-slapdProxyBackend. It is a required, single-valued attribute, which means that only one DN partitioning plug-in is allowed for a proxy server back-end. The value of the attribute consists of the path from which a customized DN partitioning module is loaded and an initialization function that registers the user-provided DN partitioning function.

The initialization function is called when the DN partitioning plug-in is loaded during proxy server startup. When the dynamically loadable plug-in module is loaded, the loader assigns addresses to the functions defined in the module. Executing the initialization function stores the address of the registered partitioning function in the proxy server back-end. The registered DN partitioning function is later called by the proxy router to route requests to target servers.

  • A DIT populated by the proxy server using one partitioning algorithm is inaccessible to a proxy server using a different partitioning algorithm. After the DIT is populated, the partition plug-in should not be changed; if the partition plug-in must be changed, the data must be reloaded. Data loaded for TDS Version 6.0 will not work with a DN partitioning plug-in in Version 6.1 unless the default plug-in is used in Version 6.1.

  • Note that if you use a customized plug-in, it must be set before you run the ddsetup command.

 

Using the command line

To modify the ibm-slapdDNPartitionPlugin attribute and to add a customized plug-in, issue the following command:

idsldapmodify -D <adminDN> -w <adminPW> -i <filename>

where <filename> contains:

dn: cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, cn=Schemas, cn=Configuration 
changetype: modify
replace: ibm-slapdDNPartitionPlugin 
ibm-slapdDNPartitionPlugin: <customized DN partitioning plug-in library> 
                            <plug-in initialization function>

 

The distributed directory setup tool

The Distributed Directory Setup (ddsetup) tool splits an LDIF file into separate LDIF files that can be loaded onto individual directory servers. The ddsetup tool can be used in a non-distributed environment to merely split up an LDIF file into separate pieces. The user has the option of splitting the DIT at one or more subtrees, specifying the split points by DN.

The ddsetup tool uses the proxy server's ibmslapd.conf file to partition entries. The data is split using the partition algorithm specified in the ibm-slapdDNPartitionPlugin attribute of the configuration file.

The ddsetup tool does not enforce object class schema checking, because it is designed for optimal performance.
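For illustration only, a hypothetical run against a proxy instance configured with the three-way o=sample split used later in this chapter might look like the following; the -i flag is assumed here to name the input LDIF file (see the IBM Tivoli Directory Server Command Reference for the authoritative syntax):

ddsetup -I proxy -B "o=sample" -i mydata.ldif

The tool writes one LDIF file per partition (in the example later in this chapter, ServerA.ldif, ServerB.ldif, and ServerC.ldif), and each file must then be loaded onto the back-end server that owns the corresponding partition index.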

 

Adding and partitioning the data

Entries are added using either the Web Administration Tool (see Adding an entry for additional information) or the idsldapadd and idsldapmodify commands (see the IBM Tivoli Directory Server Version 6.1 Command Reference).

In this scenario you are going to add the data contained in the sample.ldif file that is included with the IBM Tivoli Directory Server. Issue the following command:

idsldapadd -D cn=user1,cn=ibmpolicies -w mysecret -h <proxyhostname> -p 389 
        -i <IDS_LDAP_HOME>/examples/sample.ldif

Where IDS_LDAP_HOME varies by operating system platform:

  • AIX® operating systems - /opt/IBM/ldap/V6.1

  • HP-UX operating systems - /opt/IBM/ldap/V6.1

  • Linux® operating systems - /opt/ibm/ldap/V6.1

  • Solaris operating systems - /opt/IBM/ldap/V6.1

  • Windows® operating systems - <local_drive>:\Program Files\IBM\LDAP\V6.1 (This is the default install location. The actual IDS_LDAP_HOME is determined during installation.)

If you have an existing database with a large number of entries, you must export the entries to an LDIF file. See the idsdb2ldif command information in the IBM Tivoli Directory Server Version 6.1 Command Reference for more information about how to do this:

  1. To create the LDIF file, issue the command:
    idsdb2ldif  -o mydata.ldif -s o=sample  -I <instance_name>

  2. Issue the command:
    ddsetup -I proxy -B "o=sample" -i mydata.ldif

    where

    proxy: The name of the proxy server instance

    Attention: When you create a new directory server instance, be aware of the information that follows. If you want to use a distributed directory, cryptographically synchronize the server instances to obtain the best performance.

    When partitioning an existing directory containing AES-formatted data into a distributed directory, the partition servers must be synchronized with the original unpartitioned server. If not, LDIF export files produced by the ddsetup tool will fail to import.

    If you are creating a directory server instance that must be cryptographically synchronized with an existing directory server instance, synchronize the server instances before you do any of the following:

    • Start the second server instance

    • Run the idsbulkload command from the second server instance

    • Run the idsldif2db command from the second server instance
    See Appendix J. Synchronizing two-way cryptography between server instances for information about synchronizing directory server instances.

  3. Use idsldif2db or idsbulkload to load the data to the appropriate backend server.

    • ServerA (partition index 1) - ServerA.ldif

    • ServerB (partition index 2) - ServerB.ldif

    • ServerC (partition index 3) - ServerC.ldif

    • ServerD (partition index 4) - ServerD.ldif

    • ServerE (partition index 5) - ServerE.ldif

    The correct LDIF output must be loaded onto the server with the corresponding partition index value; otherwise, the proxy server is not able to retrieve the entries.

    For more information about the ddsetup utility, see the IBM Tivoli Directory Server Command Reference.

 

Synchronizing information

There are two main kinds of configuration information that must be kept synchronized among the servers in a distributed directory.

Subtree policies

ACLs are currently the only type of subtree policy. ACLs are honored locally within a server only. When data is split across a flat container each server contains the parent entry. If ACLs are defined on the parent entry, they must be defined on each of the parent entries. ACLs defined at the parent level or below must not have any dependencies on entries above the parent entry in the tree. The server does not enforce ACLs defined on another server.

At setup time, exact copies of the entire parent entry are added to each server if ddsetup is used; otherwise, it is the user's responsibility to add copies of the entire parent entry to the server. If the parent entry has ACLs defined on it, each server has the same ACLs for the entries below the parent after initial configuration. Any changes made to the parent entries after initial configuration have to be sent to each server containing the parent entry without using the proxy server. It is the administrator's responsibility to keep the parent entries (including the ACLs on the parent) synchronized among the servers.

Global policies including schema and password policy

The cn=ibmpolicies and cn=schema subtrees store global configuration information and must be replicated among the servers in a distributed directory. Set up gateway replication agreements under the cn=ibmpolicies subtree so that, if any of the servers has a replica, the change is passed on to that replica. With the cn=ibmpolicies replication agreement, the cn=schema and cn=pwdpolicy subtrees are automatically replicated. Global policies include the global administration group entry stored under cn=ibmpolicies. See Global administration group for more information.
Notes:

  1. The global policies are not replicated to the proxy server.

  2. Changes to cn=schema are not replicated to the proxy server.
Attention: When you create a new directory server instance, be aware of the information that follows. If you want to use a distributed directory, you must cryptographically synchronize the server instances to obtain the best performance.

If you are creating a directory server instance that must be cryptographically synchronized with an existing directory server instance, synchronize the server instances before you do any of the following:

  • Start the second server instance

  • Run the idsbulkload command from the second server instance

  • Run the idsldif2db command from the second server instance
See Appendix J. Synchronizing two-way cryptography between server instances for information about synchronizing directory server instances.

 

Partition entries

Entries that exist as the base of a partition, for example, o=sample, cannot be accessed directly through the proxy server. The proxy server can return one of these entries during a search (the proxy checks for duplicates, and any entry returned is an arbitrary copy), but these entries cannot be modified using the proxy server.

 

Setting up a distributed directory with a proxy server

The following scenario shows how to set up a proxy server and a distributed directory with three partitions for the subtree o=sample.

The illustration shows a single proxy server distributing the data for the subtree o=sample across three servers. ServerA has a partition value or hash value of 1, ServerB has a hash value of 2 and ServerC has a hash value of 3.

 

Setting up the back-end servers

Use one of the following methods to set up the back-end servers:

 

Using Web Administration

Adding the suffix to the backend servers

To add the suffix using the Web Administration Tool:

  1. Log on to ServerA, click Server administration in the Web Administration navigation area and then click Manage server properties in the expanded list. Next, click the Suffixes tab.

  2. Enter the Suffix DN, o=sample.

  3. Click Add.

  4. Repeat this process for as many suffixes as you want to add.

  5. When you are finished, click Apply to save your changes without exiting, or click OK to apply your changes and exit.

  6. Repeat this procedure for ServerB and ServerC.

For more information see Adding and removing suffixes.

Global administration group

The global administration group is a way for the directory administrator to delegate administrative rights in a distributed environment to the database backend. Global administrative group members are users that have been assigned the same set of privileges as the administrative group with regard to accessing entries in the database backend, and they have complete access to the directory server backend. All global administrative group members have the same set of privileges.

Global administrative group members have no privileges or access rights to any data or operations that are related to the configuration settings of the directory server. This is commonly called the configuration backend.

Global administrative group members cannot access schema data.

Global administrative group members also do not have access to the audit log. Local administrators, therefore, can use the audit log to monitor global administrative group member activity for security purposes.

The global administration group should be used by applications or administrators to communicate with the proxy server using administrative credentials. For example, the member that was set up using these instructions (cn=manager,cn=ibmpolicies) should be used in place of the local administrator (cn=root) when directory entries are to be modified through the proxy server. Binding to the proxy server as cn=root gives an administrator full access to the proxy server's configuration, but only anonymous access to the directory entries.
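For example, after cn=manager,cn=ibmpolicies has been created and added to the global administration group as described in the steps that follow, directory data can be searched and modified through the proxy server with that identity. The host name and password below are placeholders:

idsldapsearch -h <proxyhostname> -p 389 -D cn=manager,cn=ibmpolicies -w mysecret 
        -b "o=sample" -s sub "(objectclass=person)" cn

The same bind DN can be used with idsldapadd and idsldapmodify when entries under the distributed subtree must be changed through the proxy.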

Creating a user entry for membership in the global administrators group

  1. Log onto ServerA. This is the server that you specified as the partition for cn=ibmpolicies.

  2. Start the server.

  3. From the navigation area, expand the Directory management topic.

  4. Click Add an entry. See Adding an entry for additional information.

  5. From the Structural object class drop-down menu, select person.

  6. Click Next.

  7. Click Next to skip the Select auxiliary object classes panel.

  8. Type cn=manager in the Relative DN field.

  9. Type cn=ibmpolicies in the Parent DN field.

  10. Type manager in the cn field.

  11. Type manager in the sn field.

  12. Click the Optional attributes tab.

  13. Type a password in the userPassword field. For example mysecret.

  14. Click Finish.
Adding the user entry to the global administration group

The following steps add cn=manager to the global administration group.

  1. In the navigation area, click Manage entries.

    The Current location field displays the current level of an entry in the DIT tree in URL format. The suffix node in the DIT is displayed in the ldap://hostname:port format. The next level is displayed when you click an RDN in the RDN column of the Manage entries table; this displays the DIT at that level. To go up a level in the displayed DIT tree, click the required URL in the Current location field.

  2. Select the radio button for cn=ibmpolicies and click Expand.

    An expandable entry indicates that the entry has child entries. Expandable entries have a plus '+' sign next to them in the Expand column. You can click the '+' sign next to an entry to view its child entries.

  3. Select the radio button for globalGroupName=GlobalAdminGroup and from the Select action drop-down menu select Manage members and click Go.

  4. Specify the maximum number of members to return for a group. If you click Maximum number of members to return, enter a number. Otherwise, click Unlimited.

  5. To load the members into the table, click Load or select Load from Select Action and click Go.

  6. Type cn=manager,cn=ibmpolicies in the member field and click Add.

  7. A message is displayed: You have not loaded entries from the server. Only your changes will be displayed in the table. Do you want to continue? Click OK.

  8. cn=manager is displayed in the table. Click OK. cn=manager is now a member of the global administration group.

 

Using the command line

Adding the suffix to the backend servers

For information about adding the suffix to the backend servers using command line see Adding and removing suffixes.
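A minimal command-line sketch follows. It assumes the suffix is added to the default RDBM backend entry, cn=Directory, cn=RDBM Backends, cn=IBM Directory, cn=Schemas, cn=Configuration; see Adding and removing suffixes for the authoritative procedure and the entry names for your configuration.

idsldapmodify -h <ServerA> -D <admin_dn> -w <admin_pw> -i <filename>

where <filename> contains:

dn: cn=Directory, cn=RDBM Backends, cn=IBM Directory, cn=Schemas, cn=Configuration
changetype: modify
add: ibm-slapdSuffix
ibm-slapdSuffix: o=sample

Repeat the command against ServerB and ServerC, and restart each server if required for the new suffix to take effect.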

Creating and adding a user entry for membership in the global administrators group

Issue the commands:

idsldapadd -h <ServerA> -D <admin_dn> -w <admin_pw> -f <LDIF1>
idsldapmodify -h <ServerA> -D <admin_dn> -w <admin_pw> -f <LDIF2>

where <LDIF1> contains:

dn: cn=manager,cn=ibmpolicies
objectclass: person
sn: manager
cn: manager
userpassword: secret

and where <LDIF2> contains:

dn: globalGroupName=GlobalAdminGroup,cn=ibmpolicies
changetype: modify
add: member
member: cn=manager,cn=ibmpolicies

 

Setting up the proxy server

Use one of the following methods to set up the proxy server:

 

Using Web Administration

Configuring the proxy server

If the server you are configuring as a proxy server contains the entry data that you want to distribute across the directory, extract the entry data into an LDIF file before you configure the server. After the server is configured as a proxy server, you cannot access the data that is contained in its RDBM. If you need to access the data in its RDBM, you can either reconfigure the server so that it is not a proxy or create a new directory server instance that points to the RDBM as its database.

  1. Log onto the server that you are going to use as the proxy server.

  2. Start the server in configuration only mode.

  3. From the navigation area, expand Proxy administration.

  4. Click Manage proxy properties.

  5. Select the Configure as proxy server check box.

  6. In the Suffix DN field enter cn=ibmpolicies and click Add.

  7. In the Suffix DN field enter o=sample and click Add.

  8. To enable all groups processing, select the Enable distributed groups check box. By default, this check box is selected. The attribute ibm-slapdProxyEnableDistGroups under the ibm-slapdProxyBackend object class in the configuration file is associated with this control.

    A distributed group is a group whose group entries and member DNs are located in different partitions. When all group processing is disabled, the proxy server does not perform any distributed group evaluation. This is helpful if the distributed directories do not contain any groups or distributed groups, because the proxy server can avoid additional group processing in such cases. However, if groups are disabled at the proxy server level and the data on the backend servers contains distributed groups, the behavior is not supported and is undefined. The proxy server is unable to detect this, so no warnings or errors are issued.

  9. To enable dynamic groups processing, select the Enable distributed dynamic groups check box. By default, this check box is selected. The attribute ibm-slapdProxyEnableDistDynamicGroups under the ibm-slapdProxyBackend object class in the configuration file is associated with this control.

    Distributed dynamic groups are dynamic groups in which some or all of the members reside in a different partition. If distributed dynamic groups do not exist, dynamic group processing can be avoided. Dynamic groups must be enabled for this setting to have an impact. By selecting or clearing the Enable dynamic group check box, you can enable or disable dynamic group processing.

  10. Click OK to save your changes and return to the Introduction panel.

    You must log off the Web Administration Tool and log in again; doing so updates the navigation area. If you do not log off and then log on again, the navigation area is not updated for a proxy server.

Identifying the distributed directory servers to the proxy server

  1. Expand Proxy administration from the navigation area and click Manage back-end directory servers.

  2. Click Add.

  3. Enter the host name for ServerA in the Hostname field.

  4. Enter the port number for ServerA (for this example all servers use 389).

  5. Enter the number of connections that the proxy server can have with the back-end server in the Connection pool size field. The minimum value is 1 and the maximum value is 100. For this example, set the value to 5.

    • Do not set the value in the Connection pool size field to be less than 5.

    • The number of connections to the back-end server should be less than or equal to the number of workers configured on the back-end server.

  6. Enter the interval, in seconds, at which the server schedules health check runs.

    This field is displayed only for a version 6.1 proxy server.

  7. The authentication method for the back-end directory server is set to "Simple" by default. Verify that the Enable SSL encryption check box is not selected.

  8. Click Next.

  9. In the Bind DN field, specify the administrator DN, the DN of a member of the local administration group, or the DN of a member of the global administration group. For example, cn=root.

  10. Specify and confirm the administration password, in the Bind password fields. For example, secret.

  11. Click Finish.

  12. Repeat steps 2 through 10 for ServerB and ServerC.

  13. When you are finished, click Close to save your changes and return to the Introduction panel.

  14. Ensure that all the back-end servers are started.

    If the proxy server cannot connect with one or more of the back-end servers at start up, the proxy starts in configuration mode only. This is true unless you set up server groups. See Server groups.

Synchronizing global policies

These steps set up cn=ibmpolicies as a single partition. This is necessary to enable you to synchronize the global policies on all of the servers.

Schema modifications are not replicated by or to the proxy server. Any schema updates need to be entered on each proxy server manually.

  1. From the navigation area, click Manage partition bases.

  2. On the Partition bases table, click Add.

  3. Enter a split name in the Split Name field.

    This value represents the split name provided for a split point that splits a partition base DN into partitions. The ibm-slapdProxySplitName attribute in the ibm-slapdProxyBackendSplitContainer object class is associated with this split name. The value of the ibm-slapdProxySplitName attribute must be unique within a proxy server's configuration file and must only contain alphanumeric values. For example, if a directory is split at DN "o=sample" into two partitions, the split name is associated with the o=sample split and the two partitions. To uniquely identify a split partition use the ibm-slapdProxySplitName and ibm-slapdProxyPartitionIndex attributes.

  4. Enter cn=ibmpolicies in the Partition base DN field.

  5. Enter 1 in the Number of partitions field.

    A value greater than 1 for cn=ibmpolicies is not supported.

  6. Enable or disable auto fail-back by selecting the appropriate item from the Auto fail-back enabled combo box.
    Note:

    If a backend server is restarted and if autofailback is enabled, the proxy server will automatically start using that backend server.

  7. Enable or disable proxy high consistency by selecting the appropriate item from the Proxy high consistency combo box. For more information, see High consistency and failover when high consistency is configured.

  8. Click OK.

  9. Select the radio button for cn=ibmpolicies and click View servers.

  10. Verify that cn=ibmpolicies is displayed in the Partition base DN field.

  11. In the Back-end directory servers for partition base table, click Add.

  12. From the Back-end directory server menu, select ServerA.

  13. Enter 1 in the Partition index field.

  14. From the Server role combo box, select a role for the back-end directory server.
    Note:

    The available roles that you can assign for a back-end directory server are primarywrite and any. The primary write server role should be assigned to the master or peer server to which write requests are to be sent.

  15. From the Proxy tier combo box, select a priority that you want to assign. For more information, see Weighted Prioritization of backend servers.

  16. Click OK.
Dividing the data into partitions

These steps divide the data in the subtree o=sample into three partitions.

  1. On the Partition bases table, click Add.

  2. Enter a split name in the Split Name field.

  3. Enter o=sample in the Partition base DN field.

  4. Enter 3 in the Number of partitions field.

  5. Enable or disable auto fail-back by selecting the appropriate item from the Auto fail-back enabled combo box.

  6. Enable or disable proxy high consistency by selecting the appropriate item from the Proxy high consistency combo box.

  7. Click OK.
Assigning partition index values to the servers

These steps assign a partition value to each of the servers.

  1. Select the radio button for o=sample and click View servers.

  2. Verify that o=sample is displayed in the Partition base DN field.

  3. In the Back-end directory servers for partition base table, click Add.

  4. From the Back-end directory server drop-down menu, select ServerA.

  5. Ensure that 1 is displayed in the Partition index field.

  6. From the Server role drop-down menu, select the appropriate server role.
    Note:

    This value represents the role of a back-end directory server in a particular partition. The ibm-slapdProxyServerRole attribute in the ibm-slapdProxyBackendSplit object class is associated with this value. The values that can be assigned to this attribute are primarywrite or any.

  7. From the Proxy tier combo box, select a priority that you want to assign.

  8. Click OK.

  9. In the Back-end directory servers for partition base table, click Add.

  10. From the Back-end directory server drop-down menu, select ServerB.

  11. Ensure that 2 is displayed in the Partition index field.
    Note:

    This number is automatically incremented for you. You can manually change the partition index number; however, it cannot exceed the actual number of partitions for the base. For example, you cannot use 4 as a partition index if the partition base has only three partitions. Duplicate partition indexes are allowed only on servers participating in replication of that subtree.

  12. Click OK.

  13. In the Back-end directory servers for partition base table, click Add.

  14. From the Back-end directory server drop-down menu, select ServerC.

  15. Ensure that 3 is displayed in the Partition index field.

  16. From the Server role drop-down menu, select the appropriate server role.
    Note:

    This value represents the role of a back-end directory server in a particular partition. The ibm-slapdProxyServerRole attribute in the ibm-slapdProxyBackendSplit object class is associated with this value. The values that can be assigned to this attribute are primarywrite or any.

  17. From the Proxy tier combo box, select a priority that you want to assign.

  18. Click OK.

  19. When you are finished, click Close.

  20. Restart the proxy server for the changes to take effect.
Viewing partition bases

Do the following to view partition bases:

  1. From the navigation area, click View partition bases.

  2. Select a split from the Select a split combo box.

  3. Click Show partitions. This populates the Partition entries table with the available partitions for the selected split.

Do the following to view server entries for a partition:

  1. Select a partition entry from the Partition entries table.

  2. Click Show servers. This populates the Server entries table with the server information associated with the selected partition of a split.
Viewing entry location

Do the following to view the location of a DN entry in a distributed directory.

  1. From the navigation area, click View entry location.

  2. To search the location of a DN entry in a distributed directory, select the Entry DN option button, and then enter a valid DN in the text box.

  3. Click the Show entry details button. This will populate the Location details table with the location information of the specified DN entry.

  4. Click the Close button to return to the Introduction panel.

Do the following to view the locations of multiple DN entries in distributed directories.

  1. To search the location of multiple DN entries in distributed directory, select the Select a file containing multiple DN option button.

  2. Enter the absolute path of the file containing the multiple DN entries in the File name text box or click the Browse button and specify the location of the text file that contains DN entries.

  3. Click the Submit file button.

  4. Click the Show entry details button. This will populate the Location details table with the location information of the DN entries.

  5. Click the Close button to return to the Introduction panel.

 

Using the command line

Configuring the proxy server

Issue the commands:

idsldapmodify -h <Proxy Server> -D <admin_dn> -w <admin_pw> -i <LDIF1>
idsldapmodify -h <Proxy Server> -D <admin_dn> -w <admin_pw> -i <LDIF2>

where <LDIF1> contains:

dn: cn=Configuration
changetype: modify
replace: ibm-slapdServerBackend
ibm-slapdServerBackend: PROXY

and where <LDIF2> contains:

dn: cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, cn=Schemas, cn=Configuration
changetype: modify
add: ibm-slapdSuffix
ibm-slapdSuffix: cn=ibmpolicies
ibm-slapdSuffix: o=sample
-
replace: ibm-slapdProxyEnableDistDynamicGroups
ibm-slapdProxyEnableDistDynamicGroups: true
-
replace: ibm-slapdProxyEnableDistGroups
ibm-slapdProxyEnableDistGroups: true
Identifying the distributed directory servers to the proxy server

Issue the commands:

idsldapadd -h <Proxy Server> -D <admin_dn> -w <admin_pw> -f <LDIF1>

where <LDIF1> contains:

dn: cn=Server1, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, cn=Schemas, 
	cn=Configuration
cn: Server1
ibm-slapdProxyBindMethod: Simple
ibm-slapdProxyConnectionPoolSize: 5
ibm-slapdProxyDN: cn=root
ibm-slapdProxyPW: secret
ibm-slapdProxyTargetURL: ldap://ServerA:389
objectClass: top
objectClass: ibm-slapdProxyBackendServer
objectClass: ibm-slapdConfigEntry

dn: cn=Server2, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, cn=Schemas, 
	cn=Configuration
cn: Server2
ibm-slapdProxyBindMethod: Simple
ibm-slapdProxyConnectionPoolSize: 5
ibm-slapdProxyDN: cn=root
ibm-slapdProxyPW: secret
ibm-slapdProxyTargetURL: ldap://ServerB:389
objectClass: top
objectClass: ibm-slapdProxyBackendServer
objectClass: ibm-slapdConfigEntry

dn: cn=Server3, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, cn=Schemas, 
	cn=Configuration
cn: Server3
ibm-slapdProxyBindMethod: Simple
ibm-slapdProxyConnectionPoolSize: 5
ibm-slapdProxyDN: cn=root
ibm-slapdProxyPW: secret
ibm-slapdProxyTargetURL: ldap://ServerC:389
objectClass: top
objectClass: ibm-slapdProxyBackendServer
objectClass: ibm-slapdConfigEntry
Dividing the data into partitions and assigning partition index values to the servers

Issue the commands:

idsldapadd -h <Proxy Server> -D <admin_dn> -w <admin_pw> -f <LDIF2>

where <LDIF2> contains:

dn: cn=cn\=ibmpolicies split, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, 
	cn=Schemas, cn=Configuration
cn: cn=ibmpolicies split
ibm-slapdProxyNumPartitions: 1
ibm-slapdProxyPartitionBase: cn=ibmpolicies
ibm-slapdProxySplitName: ibmpolicysplit 
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplitContainer

dn: cn=split1, cn=cn\=ibmpolicies split, cn=ProxyDB, cn=Proxy Backends, 
	cn=IBM Directory, cn=Schemas, cn=Configuration
cn: split1
ibm-slapdProxyBackendServerDN: cn=Server1,cn=ProxyDB,cn=Proxy Backends,
	cn=IBM Directory,cn=Schemas,cn=Configuration
ibm-slapdProxyPartitionIndex: 1
ibm-slapdProxyBackendServerRole: any
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplit

dn: cn=cn\=pwdpolicy split, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, 
	cn=Schemas, cn=Configuration
cn: cn=pwdpolicy split
ibm-slapdProxyNumPartitions: 1
ibm-slapdProxyPartitionBase: cn=pwdpolicy
ibm-slapdProxySplitName: pwdpolicysplit 
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplitContainer

dn: cn=split1,cn=cn\=pwdpolicy split, cn=ProxyDB, cn=Proxy Backends, 
	cn=IBM Directory, cn=Schemas, cn=Configuration
cn: split1
ibm-slapdProxyBackendServerDN: cn=Server1,cn=ProxyDB,cn=Proxy Backends,
	cn=IBM Directory,cn=Schemas,cn=Configuration
ibm-slapdProxyPartitionIndex: 1
ibm-slapdProxyBackendServerRole: any
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplit

dn: cn=o\=ibm\,c\=us split, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, 
	cn=Schemas, cn=Configuration
cn: o=ibm,c=us split
ibm-slapdProxyNumPartitions: 3
ibm-slapdProxyPartitionBase: o=sample
ibm-slapdProxySplitName: IBMUSsplit 
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplitContainer

dn: cn=split1, cn=o\=ibm\,c\=us split, cn=ProxyDB, cn=Proxy Backends, 
	cn=IBM Directory, cn=Schemas, cn=Configuration
cn: split1
ibm-slapdProxyBackendServerDN: cn=Server1,cn=ProxyDB,cn=Proxy Backends,
	cn=IBM Directory,cn=Schemas,cn=Configuration
ibm-slapdProxyPartitionIndex: 1
ibm-slapdProxyBackendServerRole: any
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplit

dn: cn=split2, cn=o\=ibm\,c\=us split, cn=ProxyDB, cn=Proxy Backends, 
	cn=IBM Directory, cn=Schemas, cn=Configuration
cn: split2
ibm-slapdProxyBackendServerDN: cn=Server2,cn=ProxyDB,cn=Proxy Backends,
	cn=IBM Directory,cn=Schemas,cn=Configuration
ibm-slapdProxyPartitionIndex: 2
ibm-slapdProxyBackendServerRole: any
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplit

dn: cn=split3, cn=o\=ibm\,c\=us split, cn=ProxyDB, cn=Proxy Backends, 
	cn=IBM Directory, cn=Schemas, cn=Configuration
cn: split3
ibm-slapdProxyBackendServerDN: cn=Server3,cn=ProxyDB,cn=Proxy Backends,
	cn=IBM Directory,cn=Schemas,cn=Configuration
ibm-slapdProxyPartitionIndex: 3
ibm-slapdProxyBackendServerRole: any
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplit

 

Password policy in a distributed directory

Password policy in a distributed directory is enforced on the backend servers, with some additional overhead in the proxy server. There are two kinds of user password policies: global and multiple password policies. Multiple password policies are supported in the distributed directory environment only if all the group, member, and policy data is local to a single partition. Global password policy, on the other hand, is supported even when users and groups are distributed.

In order for the proxy server to support password policy it must be enabled on all backend servers. The proxy server will send password policy controls on all necessary requests. The majority of password policy enforcement is done locally on the backend servers, and therefore will function the same way as it does in a non-distributed environment. In some cases additional checking must be done at the proxy server level to ensure consistent password policy enforcement.

Note:

  • Multiple password policy in a distributed directory is enforced only for 6.1 backend servers.

  • If an administrator enables password policy or updates the password policy settings on any backend server, the proxy server must be restarted.

  • The proxy server does not support the effective password policy extended operation.

The proxy server uses two extended operations to enable password policy enforcement for external binds: the Password Policy Initialize and Verify Bind extended operation and the Password Policy Finalize and Verify Bind extended operation. For more information about these two extended operations, refer to the IBM Tivoli Directory Server Programming Reference Version 6.1.

 

Failover and load balancing

The proxy server is aware of all of the replicas of a given partition and load balances read requests among the online replicas. The proxy server is also aware of all of the masters for a given partition and must use one of them as the primary master. The server configured as the primary write server is the primary master. If no primary write server is configured, the first master or peer server becomes the primary write server. If the primary write server is down, the proxy server is capable of failing over to a backup server (one of the other master or peer servers). If the requested operation cannot be performed by the currently online servers, the proxy server returns an operations error.

Note:

  • For better performance, all backend servers and the proxy server should share the same stash files.

  • Compare operations are not load balanced.

The proxy server performs load balancing on read requests when high consistency is disabled. On the other hand, when high consistency is enabled, all read and write requests are sent to the primary write server until a failover occurs. See High consistency and failover when high consistency is configured for further information.

If a backend server becomes unavailable, the operation in progress returns an error. All subsequent operations fail over to the next available server.

 

Auto failback

TDS provides an option to enable and disable auto failback. When auto failback is enabled, the proxy server starts using a server as soon as it becomes available. However, when auto failback is disabled, servers must be restored using the resume role extended operation, except in the following cases where auto failback is always enabled:

Cases that always invoke auto failback and the action taken:

  • All back-end servers go down in a partition.

  • Action Taken:

    • If a read server is the first server to come back online, the proxy server will auto restore that server. Since read servers cannot handle write operations, the first write server to come back online will also be restored.

    • If a write server is the first server to come back online the proxy server will auto restore that write server. Since write servers can handle both read and write requests no additional servers will be automatically restored.

  • All writable backend servers go down in a partition.

  • Action Taken:

    • The first write server to come back online will be auto restored by the proxy server.
Note:

  • Auto failback can be enabled or disabled by setting the value of the attribute ibm-slapdEnableAutoFailBack to true or false, as in the sketch following these notes.

  • The default value of ibm-slapdEnableAutoFailBack is true.
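For example, auto failback can be disabled for a particular split container with idsldapmodify. The sketch below reuses the o=sample split container entry shown earlier in this chapter; substitute the DN of your own split container entry:

idsldapmodify -h <Proxy Server> -D <admin_dn> -w <admin_pw> -i <filename>

where <filename> contains:

dn: cn=o\=ibm\,c\=us split, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, 
	cn=Schemas, cn=Configuration
changetype: modify
replace: ibm-slapdEnableAutoFailBack
ibm-slapdEnableAutoFailBack: false

A restart of the proxy server may be required for the change to take effect.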

 

Health Check Feature

The proxy server back-end uses a thread named health check to identify the servers that are available and the servers that are down. The health check thread runs a health check by initiating a root DSE search for the ibm-slapdisconfigurationmode attribute against each of the back-end servers. If the root DSE search against any server fails, either because the server is down or because the server is in configuration only mode, the thread begins the failover process and marks the server as unavailable. After a server is identified as unavailable, an appropriate error message is also written to the error log.
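The root DSE search that the health check thread performs is equivalent to something like the following, which can also be run manually to verify that a back-end server would pass the health check (the host name and port are placeholders):

idsldapsearch -h <backend_host> -p 389 -s base -b "" "objectclass=*" ibm-slapdisconfigurationmode

If the server is reachable and not in configuration only mode, the search succeeds; if the search fails or the server reports that it is in configuration only mode, the proxy server begins the failover process for that server.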

The health check feature has the ability to detect when back-end servers become unresponsive. To enable this feature, you set the PROXY_HEALTHCHECK_OLIMIT environment variable. The value of the olimit indicates the threshold for the number of outstanding health check requests before the proxy server determines that a back-end server is unresponsive.

Consider an example where the health check interval is set to 5 seconds and the olimit is set to 5. In this case, if the back-end server does not respond to the health check searches within 25-30 seconds, the proxy server marks the back-end server as disconnected and fails over to the next available server. A message (GLPPXY044E) is also logged at this time.

When such a message is logged, it could be because the back-end server is overloaded and needs performance tuning or a hardware upgrade, or because there is some error condition in the back-end server that needs to be addressed. The proxy server updates the state of the back-end server to ready when the back-end server successfully responds to root DSE searches again. If auto failback is enabled, the server is restored at this time. If auto failback is disabled, an administrator can use the resume role extended operation to resume use of the server.

Note:

  • Use caution when configuring PROXY_HEALTHCHECK_OLIMIT. If the proxy server is under heavy load and the olimit value is set too small, the proxy may falsely report that the back-end server is unresponsive. To correct this problem, increase the olimit (a configuration sketch follows these notes).

  • If the PROXY_HEALTHCHECK_OLIMIT environment variable is not set, the health check outstanding limit defaults to 120 seconds.
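As a sketch, again assuming that proxy environment variables are carried in the ibm-slapdSetEnv attribute of the cn=Front End entry, the outstanding limit could be raised as follows and applied with idsldapmodify, followed by a restart of the proxy server:

dn: cn=Front End, cn=Configuration
changetype: modify
add: ibm-slapdSetEnv
ibm-slapdSetEnv: PROXY_HEALTHCHECK_OLIMIT=10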

 

Health Check Status Interval Configuration

The ibm-slapdStatusInterval attribute represents the time interval between health check runs scheduled by the server. This attribute is not dynamic, and its default value is 0; a value of 0 disables the health check. An administrator can modify the value of this attribute to best suit the environment.
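A minimal sketch of changing the interval follows. It assumes, based on the health check field on the Manage back-end directory servers panel, that ibm-slapdStatusInterval is set on each back-end server definition under the proxy backend; confirm the attribute location for your release before applying it.

idsldapmodify -h <Proxy Server> -D <admin_dn> -w <admin_pw> -i <filename>

where <filename> contains:

dn: cn=Server1, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, cn=Schemas, 
	cn=Configuration
changetype: modify
replace: ibm-slapdStatusInterval
ibm-slapdStatusInterval: 60

Because the attribute is not dynamic, restart the proxy server for the new interval to take effect.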

 

High consistency and failover when high consistency is configured

Sometimes, high consistency is required by applications. For instance, an application may write some data and then immediately perform a search to ensure the update was correct. In a high consistency environment, the proxy server does not distribute read operations in a round-robin manner. Instead, the proxy server directs all read and write operations for a single partition to a single back-end server.

High consistency is configurable on a per-split basis. To enable high consistency, set the attribute ibm-slapdProxyHighConsistency to true.

The sample entry below enables high consistency for the split container with partition base o=sample.

Sample Entry
dn: cn=o\=sample split, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, 
cn=Schemas, cn=Configuration
cn: o=sample split
ibm-slapdProxyNumPartitions: 1
ibm-slapdProxyPartitionBase: o=sample
ibm-slapdProxySplitName: ibmUS
ibm-slapdEnableAutoFailBack: true
ibm-slapdProxyHighConsistency: true
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendSplitContainer

All read and write operations within a single partition are directed to a single back-end server. When the primary back-end server goes down, the proxy fails over to a configured secondary server. All read and write operations are then directed to that server until the primary server is restored.

 

Weighted Prioritization of backend servers

The proxy server prioritizes back-end servers into five possible tiers. At any given time, the proxy server uses servers in only one tier. When all the write servers within a tier fail, the proxy server fails over to the second tier; when the second tier fails, it fails over to the third tier, and so on.

Weighted prioritization is configurable for each back-end server within a split by setting a value for the attribute ibm-slapdProxyTier. The default value for this attribute is 1, and if the attribute is not present, the proxy treats the back-end server as a tier one server. Valid values for this attribute range from 1 to 5.
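For example, a back-end server can be moved to tier 2 by setting ibm-slapdProxyTier on its split entry. The sketch below reuses the cn=split2 entry from the earlier configuration example and is illustrative only:

idsldapmodify -h <Proxy Server> -D <admin_dn> -w <admin_pw> -i <filename>

where <filename> contains:

dn: cn=split2, cn=o\=ibm\,c\=us split, cn=ProxyDB, cn=Proxy Backends, 
	cn=IBM Directory, cn=Schemas, cn=Configuration
changetype: modify
add: ibm-slapdProxyTier
ibm-slapdProxyTier: 2

With this setting, the proxy server uses this server only after all tier one write servers for the partition have failed.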

During startup, all servers in all tiers will be contacted. If the administrator wants the proxy server to start up even if some of the back-end servers in different tiers are not available, then server groups can be used. For more information about server groups, see Server groups.

 

Failover between proxy servers

In a proxied directory, failover support between proxies is provided by creating an additional proxy server that is identical to the first proxy server. These are not the same as peer masters; the proxy servers have no knowledge of each other and must be managed through a load balancer.

A load balancer, such as the IBM WebSphere® Edge Server, has a virtual host name that applications use when sending updates to the directory. The load balancer is configured to send those updates to only one server. If that server is down, or unavailable because of a network failure, the load balancer sends the updates to the next available proxy server until the first server is back on line and available. Refer to your load balancer product documentation for information on how to install and configure the load balancing server.

The illustration shows a load balancer managing two proxy servers. Each of the proxy servers is a supplier to the three servers that are their consumers. There are no supplier or consumer agreements between the two proxy servers.
Note:

In a load-balanced proxy environment, if a proxy server fails, the first operation sent to it fails and returns an error. All subsequent operations are sent to the failover proxy server. The first operation that failed can be retried. It is not automatically sent to the failover server.

 

Setting up backup replication for a distributed directory with proxy servers

In this example you are going to set up a distributed directory and use replication to provide read and write backup capabilities. Each of the three partitions for the suffix o=sample has a corresponding hash value (H1, H2, or H3). Each partition has its own replication site consisting of two peer servers and a replica to provide the read and write backup capabilities. Each proxy server has knowledge of all the servers in the topology (indicated by the dashed connections). The relationships among the servers in each replication site are represented by the solid lines.

The illustration shows a load balancer managing two proxy servers. There are three partitions for the subtree o=sample. Each partition contains two peer servers and a replica for a total of nine servers. The two proxy servers have agreements with all nine servers. There are no supplier or consumer agreements between the two proxy servers.

To create this scenario:

  1. Create an LDIF file for the data you are going to partition. See Creating an LDIF file for your data entries

  2. Create a replication topology for the data subtree. See Setting up the replication topology.

  3. Create a second replication topology for the cn=ibmpolicies subtree. See Setting up a topology for global policies.

  4. Set up the proxy servers. See Setting up the proxy servers

  5. Partition existing data. See Partitioning the data.

  6. Load the data. See Loading the partitioned data.

  7. Start replication. See Starting replication

For more information about setting up replication, see Replication.

 

Server groups

If the proxy server is unable to contact a backend server, or if authentication fails, then proxy server startup fails and the proxy server starts in configuration only mode by default, unless server groupings have been defined in the configuration file.

Server groupings enable the user to state that several backend servers are mirrors of each other, and proxy server processing can continue even if one or more backend servers in the group is down, assuming that at least one backend server is online. Connections are restarted periodically if the connections are closed for some reason, such as the remote server is stopped or restarted.

The proxy configuration file supports a special set of entries that enable a directory administrator to define server groups in the configuration file. Each group contains a list of backend servers. As long as at least one backend server in each group can be contacted, the proxy server starts successfully and services client requests, though performance might be degraded. The backend servers within a group entry have an OR relationship with each other, and the group entries themselves have an AND relationship with each other.

The directory administrator must define the server groups using idsldapadd and idsldapmodify to add and modify the required entries. The directory administrator must ensure that each of the backend servers is placed in a server group and that the backend servers in each server group contain the same partition of the directory database. For example, suppose that server1 and server2 are peers of each other, and server3 and server4 are separate peers; that is, server1 and server2 hold a disjoint data set from server3 and server4. In this case, a user would add server1 and server2 in a server group entry under the cn=configuration suffix, and server3 and server4 in a separate server group entry. If either server1 or server2 is up, then the proxy server can proceed to check whether either server3 or server4 is online. If neither server3 nor server4 is up, then the proxy server starts in configuration only mode.

In addition to the server grouping, the administrator must add the serverID of each backend server in the server group entry. If the server is down, no root DSE information can be gained, and the serverID is needed for determining the supplier/consumer relationships throughout the topology.

Any backend servers not in a server group that are offline at proxy server startup cause the proxy server to start in configuration only mode.

The following is an example of user-defined server groupings:

dn: cn=serverGroup, cn=ProxyDB, cn=Proxy Backends, cn=IBM Directory, cn=Schemas,
 cn=Configuration
cn: serverGroup
ibm-slapdProxyBackendServerDN: cn=Server1,cn=ProxyDB,cn=Proxy Backends,
 cn=IBM Directory,cn=Schemas,cn=Configuration
ibm-slapdProxyBackendServerDN: cn=Server2,cn=ProxyDB,cn=Proxy Backends,
 cn=IBM Directory,cn=Schemas,cn=Configuration
objectclass: top
objectclass: ibm-slapdConfigEntry
objectclass: ibm-slapdProxyBackendServerGroup
Notes:

  1. In each entry pointed to by ibm-slapdProxyBackendServerDN, the attribute ibm-slapdServerId must be added, with its value identical to the value on the corresponding backend server. An example is shown after these notes.

  2. Web Administration Tool support for server groupings is not available. It is the administrator's responsibility to keep these entries correct and in sync with the distributed configuration. The LDAP protocol must be used to maintain the entries.
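The following is a minimal sketch of how the ibm-slapdServerId attribute might be added to the cn=Server1 entry referenced by the server group example above. The value srv1-uuid and the file name addserverid.ldif are hypothetical; substitute the actual ibm-slapdServerId value of the corresponding backend server and your own connection parameters for the proxy server.

idsldapmodify -D <adminDN> -w <adminpw> -h <proxyhost> -p <proxyport> -f addserverid.ldif

where addserverid.ldif contains:

dn: cn=Server1,cn=ProxyDB,cn=Proxy Backends,cn=IBM Directory,cn=Schemas,
 cn=Configuration
changetype: modify
add: ibm-slapdServerId
ibm-slapdServerId: srv1-uuid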

 

Creating an LDIF file for your data entries

To create an LDIF file (mydata.ldif) for the data entries in the subtree o=sample if they currently reside on a server:

  • Issue the command:
    idsdb2ldif -o mydata.ldif -s o=sample -I <instance_name> -k <key seed> -t <key salt>
Note:

You must use the -I option if there is more than one instance. You must use the -k and -t options if keys on the server are not in sync.
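For example, on a machine with more than one directory server instance, where the instance that holds o=sample is named dsrdbm01 (a hypothetical instance name) and the keys are already in sync, the command might look like this:

    idsdb2ldif -o mydata.ldif -s o=sample -I dsrdbm01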

Attention: If you are exporting data that will be imported into an Advanced Encryption Standard (AES)-enabled server and if the two servers are not cryptographically synchronized, see Appendix J. Synchronizing two-way cryptography between server instances for information about cryptographic synchronization of servers.

See the idsdb2ldif command information in the IBM Tivoli Directory Server version 6.1 Command Reference for more information.

 

Setting up the replication topology

Ensure that you understand replication concepts and terms before attempting to create this scenario. See Replication, if you do not understand the concept of replication.

In this topology created using the Web Administration Tool, each partition is treated as a separate replication site. However, there are no gateway servers in this topology because you do not want the partitioned data to be replicated to the other partitions.

Note:

At this point you are creating the topology. Do not load any entry data.

  1. Log on to ServerA and, if you have not already done so, add the subtree o=sample. Doing this makes ServerA a master server for o=sample. See Adding a subtree.

  2. Create a set of credentials for the topology. See Adding credentials.

  3. Add ServerA2 as a peer-master server. See Adding a peer-master or gateway server.

  4. Add ServerA3 as a replica. Ensure that the supplier agreement with ServerA2 is selected. See Adding a replica server.
    Note:

    You can either log on to ServerB and ServerC to create similar topologies, as you did with ServerA, or continue to create the topology from ServerA. Remember that if you continue to add the topology from ServerA, deselect any agreements that the Web Administration Tool tries to create that are not appropriate for the topology. For example, no agreement can exist between any of the "A" servers and any of the "B" or "C" servers. Conversely, none of the "B" servers can have any agreements with any of the "A" or "C" servers.

  5. Add ServerB as a master server for the subtree o=sample. See Adding a peer-master or gateway server. Remember to deselect any agreements with ServerA, ServerA2, and ServerA3.

  6. Add ServerB2 as a peer-master server of ServerB. See Adding a peer-master or gateway server. Remember to deselect any agreements with ServerA, ServerA2, and ServerA3.

  7. Add ServerB3 as a replica. Deselect any supplier agreements from ServerA and ServerA2 that are selected. See Adding a replica server.

  8. Add ServerC as a master server for the subtree o=sample. See Adding a peer-master or gateway server. Remember to deselect any agreements with ServerA, ServerA2, ServerA3, ServerB, ServerB2, and ServerB3.

  9. Add ServerC2 as a peer-master server of ServerC. See Adding a peer-master or gateway server. Remember to deselect any agreements with ServerA, ServerA2, ServerA3, ServerB, ServerB2, and ServerB3.

  10. Add ServerC3 as a replica. Deselect any supplier agreements from ServerA, ServerA2, ServerB, and ServerB2. See Adding a replica server.

For more information about setting up replication, see Replication.

 

Setting up a topology for global policies

You need to set up a second topology for the cn=ibmPolicies subtree to replicate global policy updates. For example, you could use the same topology setup that you created for o=sample and make ServerA, ServerB, and ServerC gateway servers.

The illustration shows a load balancer managing two proxy servers. There is only one partition for the subtree cn=ibmPolicies. The partition contains three replication sites. Each site has a gateway server, a peer server, and a replica for a total of nine servers. The proxy servers have agreements with the gateway and peer servers, but not the replica servers. There are no supplier or consumer agreements between the two proxy servers.

In this topology, updates made to any one of the servers are replicated to all the servers.

Ensure that you create the appropriate agreements between the replication sites. See Setting up a gateway topology and Managing gateway servers for information on how to set up this kind of a topology.

You do not have to use the same topology model that you set up for the data subtree. You could create a topology in which servers A, A2, B, B2, C, and C2 are all peer servers with agreements amongst themselves and the replica servers A3, B3, and C3. The only requirement is that all the servers in your data subtree topology are included in the cn=ibmpolicies subtree topology.

Note:

Remember that schema changes are not replicated by the proxy servers. Updates to the schema must be made on each of the proxy servers and on one of the peer-master servers in the cn=ibmpolicies topology, as in the sketch after this note.
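As a minimal sketch (the host names proxyA, proxyB, and serverA, and the file schema_change.ldif, are hypothetical), a schema update held in an LDIF file might be applied to both proxy servers and to one peer master as follows:

idsldapmodify -D <adminDN> -w <adminpw> -h proxyA -p <port> -f schema_change.ldif
idsldapmodify -D <adminDN> -w <adminpw> -h proxyB -p <port> -f schema_change.ldif
idsldapmodify -D <adminDN> -w <adminpw> -h serverA -p <port> -f schema_change.ldif

The update made on serverA is then replicated to the other back-end servers through the cn=ibmpolicies topology.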

 

Setting up the proxy servers

  1. Set up proxy server Proxy A:

    Follow the directions in Setting up the proxy server to set up your proxy server. Remember that when the instructions tell you to repeat steps for ServerB and ServerC, you need to perform those steps for ServerA2, ServerA3, ServerB2, ServerB3, ServerC2, and ServerC3 as well.

    Note:

    Remember to assign the correct partition index value to each backend server, as shown in the following table:

    Server name    Partition index value
    ServerA        1
    ServerA2       1
    ServerA3       1
    ServerB        2
    ServerB2       2
    ServerB3       2
    ServerC        3
    ServerC2       3
    ServerC3       3

  2. Set up the second proxy server, Proxy B, the same way you set up Proxy A.

  3. Add a load balancer such as the IBM WebSphere Edge Server.

 

Partitioning the data

To partition the data contained in the mydata.ldif file you created for the subtree o=sample, issue the following command:

ddsetup -I ProxyA -B "o=sample" -i mydata.ldif

where ProxyA is the proxy server instance, o=sample is the base of the subtree being partitioned, and mydata.ldif is the input LDIF file. The ddsetup tool splits the data into separate LDIF files, one for each partition (in this scenario, ServerA.ldif, ServerB.ldif, and ServerC.ldif); see Loading the partitioned data.

 

Loading the partitioned data

The correct LDIF output must be loaded onto the servers with the corresponding partition index value; otherwise, the proxy server is not able to retrieve the entries.

Depending on the amount of data, use idsldif2db or idsbulkload to load the data onto the appropriate backend servers. Loading the appropriate LDIF file directly on each server might be more efficient than having the data replicated. An example command is shown after the following list.

  • ServerA (partition index 1) - ServerA.ldif

  • ServerA2 (partition index 1) - ServerA.ldif

  • ServerA3 (partition index 1) - ServerA.ldif

  • ServerB (partition index 2) - ServerB.ldif

  • ServerB2 (partition index 2) - ServerB.ldif

  • ServerB3 (partition index 2) - ServerB.ldif

  • ServerC (partition index 3) - ServerC.ldif

  • ServerC2 (partition index 3) - ServerC.ldif

  • ServerC3 (partition index 3) - ServerC.ldif
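As a minimal sketch, assuming that the backend directory server instance holding partition index 1 is named dsinstA (a hypothetical instance name), the corresponding LDIF output might be loaded with either of the following commands; see the IBM Tivoli Directory Server version 6.1 Command Reference for the complete option descriptions:

idsldif2db -I dsinstA -i ServerA.ldif
idsbulkload -I dsinstA -i ServerA.ldif

Repeat the load on each backend server, using the LDIF file that matches that server's partition index value.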

 

Monitor Search

Administrators can use a monitor search to determine the current status of the proxy server. A monitor search does not actively query for status; it simply reports the current status that is available to the proxy server. This means that if a back-end server is down and the proxy server has not yet discovered it, the outage is not reported in the search result. A monitor search for "cn=partitions, cn=proxy, cn=monitor" returns one entry for each split point, partition, and server in each partition.

Note:

  • On a proxy server, the cn=monitor search shows operations as completed before they actually complete. If operation counts are needed to detect actually completed operations, use the cn=proxy,cn=monitor search instead.

  • In a proxy server environment a single request from a client can map to multiple different kinds of requests in the proxy environment. For example, a bind maps to a compare, search, and a series of extended operations to evaluate group membership.

An example of a monitor search for the search base "cn=partitions, cn=proxy, cn=monitor" is given below:

idsldapsearch -D <adminDN> -w <adminpw> -h <servername> -p <portnumber>
-b cn=partitions,cn=proxy,cn=monitor -s base objectclass=*

This command returns the following information:

Split Point Entry:
ibm-slapdProxySplitName= <configured name>, cn=partitions, cn=proxy, cn=monitor
ibm-slapdProxyPartitionBase= <configured base>
ibm-slapdProxyHighConsistencyEnabled = <true|false>
ibm-slapdProxyCurrentTier = <tier number>   (the current tier that the proxy server uses to process operations)

Partition Entry:
ibm-slapdProxyPartitionIndex= <index value>,ibm-slapdProxySplitName= <configured name>, 
cn=partitions,cn=proxy, cn=monitor
ibm-slapdProxyPartitionStatus : (active, readonly, unavailable)
ibm-slapdProxyPartitionIndex= <index value>

Server Entry:
ibm-slapdPort= <port> + ibm-slapdProxyBackendServerName= <server URL>,
ibm-slapdProxyPartitionIndex= <index value>, ibm-slapdProxySplitName= <configured name>, 
cn=partitions, cn=proxy, cn=monitor 
ibm-slapdServerStatus: (active, unavailable)
ibm-slapdProxyCurrentServerRole: (primarywriteserver, readonlyserver, writeserver, notactive)
ibm-slapdProxyConfiguredRole: (primarywriteserver, readonlyserver, writeserver)
ibm-slapdProxyNumberofActiveConnections: <connection count> 

where

  • ibm-slapdProxyPartitionStatus:

    • active: If at least one write server is active.

    • readonly: If no write servers are active, but at least one read server is active.

    • unavailable: No servers are active in the partition.

  • ibm-slapdServerStatus:

    • active: The server is up and the proxy server has established connections to the server.

    • unavailable: The server is either started in configuration mode, or the proxy server is unable to establish a connection to the server with the proper authority.

  • ibm-slapdProxyCurrentServerRole:

    • primarywriteserver: The server is active and receiving all the write requests. If high consistency is enabled the server is also receiving all the read requests.

    • readonlyserver: The server is active and available for read-only requests. The server is used only if high consistency is disabled, or if all the write servers are down.

    • writeserver: The server is active and available. If high consistency is enabled, this server is not used until failover. If high consistency is disabled, this server is used as a read server until a failover situation occurs.

    • notactive: This means that the server is currently not being used in this partition. This can mean one of two things: the server is unreachable, or the server is up, but has not been restored in this partition.

  • ibm-slapdProxyConfiguredRole: This is the role that the server was configured with. If no roles were explicitly configured, this value is set based on the proxy server's own discovery algorithm at startup.

  • ibm-slapdProxyNumberofActiveConnections: This is the actual number of connections that are open to the backend server.
Note:

If the connection is secure, the ibm-slapdSecurePort attribute is used instead of ibm-slapdPort.

A monitor search for cn=proxy,cn=monitor provides counters for each kind of operation requested and completed by the proxy back-end. The only filter supported by this search is objectclass=*. The counters cover all the back-end servers configured in the proxy server.
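For example, the following search (using the same connection parameters and placeholder values as the earlier cn=partitions example) retrieves these counters:

idsldapsearch -D <adminDN> -w <adminpw> -h <servername> -p <portnumber>
-b cn=proxy,cn=monitor -s base objectclass=*

This search returns the following counters: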

  • ops_requested - The number of operations requested by the Proxy Backend.

  • ops_completed - The number of operations completed by the Proxy Backend.

  • search_requested - The number of search operations requested by the Proxy Backend.

  • search_completed - The number of search operations completed by the Proxy Backend.

  • binds_requested - The number of bind operations requested by the Proxy Backend.

  • binds_completed - The number of bind operations completed by the Proxy Backend.

  • unbinds_requested - The number of unbind operations requested by the Proxy Backend.

  • unbinds_completed - The number of unbind operations completed by the Proxy Backend.

  • adds_requested - The number of add operations requested by the Proxy Backend.

  • adds_completed - The number of add operations completed by the Proxy Backend.

  • deletes_requested - The number of delete operations requested by the Proxy Backend.

  • deletes_completed - The number of delete operations completed by the Proxy Backend.

  • modrdns_requested - The number of modrdn operations requested by the Proxy Backend.

  • modrdns_completed - The number of modrdn operations completed by the Proxy Backend.

  • modifies_requested - The number of modify operations requested by the Proxy Backend.

  • modifies_completed - The number of modify operations completed by the Proxy Backend.

  • compares_requested - The number of compare operations requested by the Proxy Backend.

  • compares_completed - The number of compare operations completed by the Proxy Backend.

  • abandons_requested - The number of abandon operations requested by the Proxy Backend.

  • abandons_completed - The number of abandon operations completed by the Proxy Backend.

  • extops_requested - The number of extended operations requested by the Proxy Backend.

  • extops_completed - The number of extended operations completed by the Proxy Backend.

  • unknownops_requested - The number of unknown operations requested by the Proxy Backend.

  • unknownops_completed - The number of unknown operations completed by the Proxy Backend.

  • total_connections - The number of connections between the proxy backend and backend servers configured for the proxy server.

  • total_ssl_connections - The number of SSL connections between the proxy backend and backend servers configured for the proxy server.

  • used_connections - The number of used connections between the proxy backend and backend servers configured for the proxy server.

  • used_ssl_connections - The number of used SSL connections between the proxy backend and backend servers configured for the proxy server.

  • total_result_sent - The number of results sent by the proxy backend to the client since the proxy server was started.

  • total_entries_sent - The number of entries sent by the proxy backend to the client since the proxy server was started.

  • total_success_result_sent - The number of success results sent by the proxy backend to the client since the proxy server was started.

  • total_failed_result_sent - The number of failed results sent by the proxy backend to the client since the proxy server was started.

  • total_references_sent - The number of references sent by the proxy backend to the client since the proxy server was started (related to referrals).

  • transactions_requested - The number of transaction operations requested by the Proxy Backend.

  • transactions_completed - The number of transaction operations completed by the Proxy Backend.

  • transaction_prepare_requested - The number of prepare transaction operations requested by the Proxy Backend.

  • transaction_prepare_completed - The number of prepare transaction operations completed by the Proxy Backend.

  • transaction_commit_requested - The number of commit transaction operations requested by the Proxy Backend.

  • transaction_commited - The number of commit transaction operations completed by the Proxy Backend.

  • transaction_rollback_requested - The number of rollback transaction operations requested by the Proxy Backend.

  • transaction_rollbacked - The number of rollback transaction operations completed by the Proxy Backend.

 

Transactions in a Proxy

Transactions enable an application to group a set of entry updates. The proxy server can process concurrent transaction requests in which all operations target a single backend server.

The proxy server uses the backend servers' transaction functionality to complete transaction requests. Transactions are enabled on the proxy server only if they are enabled on the backend servers. A message is logged at startup if the backend servers have transactions enabled. In addition, the prepare transaction extended operation is enabled only if it is enabled on the backend servers. A message is logged at startup if the backend servers do not support the prepare transaction request.

For best results, the maximum number of transactions configured on the proxy server must be at least one less than the number of connections available to each backend server. For example, if the connection pool value is set to 10, set the maximum number of transactions to 9 or less. Also, if the backend servers have a smaller transaction timeout value, the proxy server's transactions are rolled back when that smaller timeout value is reached.

 

Starting replication

If replication has not started automatically, you need to unquiesce the subtree and restart the queues for each of the servers. See Quiescing the subtree and Managing queues for information about how to perform those tasks.


