LDAP Architecture

Admin Article

Article Information

This article applies to the following ZCS versions: ZCS 7.0, ZCS 8.0


Zimbra LDAP Architecture

Zimbra uses OpenLDAP as one of its primary datastores. The LDAP database is used to store a wide variety of data, including but not limited to:

* Server configuration pieces
* Software configuration pieces (Jetty, Postfix, OpenDKIM, Amavis, ClamAV, etc.)
* User data
* COS data
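All of this data can be inspected directly with standard LDAP command-line tools. The sketch below is illustrative only (the hostname is a placeholder and the COS DN assumes the stock "default" COS); it binds as the Zimbra admin DN, whose password is stored in localconfig, and dumps the default COS entry:

  zmlocalconfig -s zimbra_ldap_password
  ldapsearch -x -H ldap://ldap1.example.com:389 \
      -D uid=zimbra,cn=admins,cn=zimbra -w <zimbra_ldap_password> \
      -b cn=default,cn=cos,cn=zimbra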

OpenLDAP Internals

OpenLDAP and BDB (ZCS7 and previous)

In ZCS 7 and prior releases, OpenLDAP uses Berkeley Database (BDB) as the storage engine. OpenLDAP has two database backends that rely on BDB, back-bdb and back-hdb. Zimbra uses the back-hdb backend due to its superior performance profile compared to back-bdb. There are a number of tuning pieces necessary to get optimal performance when using either back-bdb or back-hdb. Detailed specifics on tuning are documented at [OpenLDAP performance tuning for ZCS 7]. Here we will give an overview of the different pieces.

OpenLDAP Caches

Unfortunately, reading data directly out of the BDB database is quite slow. To work around this limitation, the OpenLDAP server process has 3 caches per BDB database that can be configured to hold data directly in memory while the process is running, so that entries do not have to be constantly pulled out of the BDB database. The larger the settings, the greater the total memory requirements for the slapd process. Caches release entries based on the [CLOCK Algorithm]. An example cache configuration is shown after the list. These caches are:

  • Entry cache -- This caches full entries up to the configured maximum in memory
  • IDL cache -- This caches the results of the most frequent indexed queries in memory
  • DN cache -- This caches the entry DNs for the entries in the database in memory, helping to speed up dn2id queries. This should be left at unlimited if at all possible.
  • Cache free -- This setting determines how many entries will be freed from a cache if its maximum size is smaller than the total possible number of entries.
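For back-hdb, these caches are configured on the database entry in OpenLDAP's cn=config tree. The following is a rough sketch only -- the {2} database index and every value shown are illustrative, and real values should come from the tuning guide referenced above:

  dn: olcDatabase={2}hdb,cn=config
  olcDbCacheSize: 10000
  olcDbIDLcacheSize: 30000
  olcDbDNcacheSize: 0
  olcDbCacheFree: 1000

Here olcDbCacheSize is the entry cache (in entries), olcDbIDLcacheSize the IDL cache, olcDbDNcacheSize the DN cache (0 meaning unlimited), and olcDbCacheFree the number of entries released when a cache reaches its maximum size.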

BDB Cache

BDB operates with a BDB-specific caching layer between the database and any application using BDB. This caching layer can either be on disk or stored in memory via [Shared Memory]. For optimal performance, it is recommended to use shared memory. This setting has the single greatest impact on OpenLDAP performance. It is highly recommended that the BDB cache be larger than the size of the database so that it can be fully contained in memory.
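The size of the BDB cache itself is set via the DB_CONFIG file in the BDB database directory, while the shared memory key is an attribute on the hdb database in cn=config. A minimal sketch, assuming an illustrative 2 GB cache held in a single segment:

  DB_CONFIG:
    set_cachesize 2 0 1

  cn=config (hdb database entry):
    olcDbShmKey: 1

The three arguments to set_cachesize are gigabytes, bytes, and the number of cache segments; a non-zero olcDbShmKey makes BDB keep the cache in a shared memory segment rather than in memory-mapped cache files.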

OpenLDAP database storage format with BDB

When an entry is stored in OpenLDAP, it is broken down into multiple parts inside the BDB database. These parts are:

  • The dn2id.bdb database. This database maps an entry DN to a unique identifier.
  • The id2entry.bdb database. This database contains the entries stored by unique identifier.
  • Index databases. The number of these databases depends on what indices have been configured inside of OpenLDAP. There is one index database per indexed attribute. The index database is queried when an ldap search contains an indexed attribute so that the entire id2entry database does not have to be processed for results.
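These databases appear as individual files in the OpenLDAP data directory. As a hedged example (the path shown is the typical location in a ZCS 7 install, and the index file names depend on which attributes are indexed):

  ls /opt/zimbra/data/ldap/hdb/db
  dn2id.bdb  id2entry.bdb  mail.bdb  objectClass.bdb  zimbraId.bdb  ...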

OpenLDAP and LMDB (ZCS8 and later)

With ZCS8 and later, Zimbra uses the back-mdb backend by default. This backend uses the LMDB database for storage. LMDB is a memory-mapped database, which allows for substantially superior performance over BDB. One of the benefits of the performance improvement is that there is no longer a need to keep multiple caches in slapd as there was with the BDB-based backends. This substantially lowers the overall memory requirements when running OpenLDAP.

Database storage format with LMDB

With LMDB, the database is stored in a single file named "data.mdb". It contains multiple sub-databases, similar to how data was stored with the BDB backend. These sub-databases are:

  • dn2i -- DN to ID database
  • id2e -- ID to entry database
  • ad2i -- Attribute description to ID database
  • Index databases. The number of these databases depends on what indices have been configured inside of OpenLDAP. There is one index database per indexed attribute. The index database is queried when an ldap search contains an indexed attribute so that the entire id2entry database does not have to be processed for results.
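The contents of data.mdb can be examined with the mdb_stat utility that ships with LMDB. A hedged example (assuming mdb_stat is on the PATH; the database path shown is the typical ZCS 8 location):

  mdb_stat -e -a /opt/zimbra/data/ldap/mdb/db

The -e flag prints information about the LMDB environment and -a prints statistics for every sub-database, which will show dn2i, id2e, ad2i, and one database per indexed attribute.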

LDAP and Authentication

By default, Zimbra authenticates users against their user entry stored in LDAP via a custom Zimbra Authentication module. However, it is possible to configure Zimbra to authenticate users through an external directory server instead (an example of configuring this follows the list below). Authentication is performed any time it is necessary to validate the identity of the user. Some operations requiring authentication:

  • Web client login
  • POP connections
  • IMAP connections
  • SMTP(S) connections
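External authentication is configured per domain via zmprov. The following is a hedged sketch only -- the external directory hostname and bind DN template are placeholders:

  zmprov modifyDomain example.com zimbraAuthMech ldap \
      zimbraAuthLdapURL ldap://ds.example.com:389 \
      zimbraAuthLdapBindDn "uid=%u,ou=people,dc=example,dc=com"

With zimbraAuthMech set to ldap, Zimbra only uses the external directory to verify the password; the account itself (and all of its data) still lives in the Zimbra LDAP server.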

OpenLDAP and Nginx

In the majority of installations, Nginx does not access LDAP directly. However, when certificate authentication or a SASL mechanism such as GSSAPI is used, Nginx will authenticate against LDAP in order to log into the upstream server.

OpenLDAP and the MTA

OpenLDAP and Postfix

Postfix uses the OpenLDAP server extensively. Every email passing through Postfix results in multiple queries to the LDAP server to determine the delivery destination(s) for that email.

Example Postfix LDAP query obtaining the delivery transport:

  • SRCH base="" scope=2 deref=0 filter="(&(|(zimbraMailDeliveryAddress=abcd@example.com)(zimbraDomainName=abcd@example.com))(zimbraMailStatus=enabled))"
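Postfix issues these queries through LDAP map files generated from Zimbra's configuration templates. A minimal sketch of such a map is shown below; the parameter names are standard Postfix ldap_table settings, but the file contents here are illustrative rather than a copy of what Zimbra actually writes:

  server_host = ldap://ldap1.example.com:389
  search_base =
  version = 3
  query_filter = (&(|(zimbraMailDeliveryAddress=%s)(zimbraDomainName=%s))(zimbraMailStatus=enabled))
  result_attribute = zimbraMailTransport

A map like this would be referenced from main.cf, for example as transport_maps = proxy:ldap:/opt/zimbra/conf/ldap-transport.cf in a typical ZCS install.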

OpenLDAP and Amavis

Amavis queries the LDAP server during every email delivery to look up information such as banned users and whitelisted users for use in scoring the email for delivery.

Example Amavis query:

  • SRCH base="" scope=2 deref=2 filter="(&(objectClass=amavisAccount)(zimbraMailStatus=enabled)(|(|(mail=abcd@example.com)(mail=@example.com)(mail=@.example.com)(mail=example.com)(mail=@.com)(mail=com)(mail=@.))(|(zimbraDomainName=abcd@example.com)(zimbraDomainName=@example.com)(zimbraDomainName=@.example.com)(zimbraDomainName=example.com)(zimbraDomainName=@.com)(zimbraDomainName=com)(zimbraDomainName=@.))))"

OpenLDAP and OpenDKIM (ZCS8 and later)

OpenDKIM queries the LDAP server on outgoing emails to determine if DKIM signing is enabled for the sending domain. If signing is enabled, it retrieves the signing key information from the LDAP server as well. Example OpenDKIM queries:

  • SRCH base="" scope=2 deref=0 filter="(DKIMIdentity=zimbra.com)"
  • SRCH attr=DKIMSelector
  • SEARCH RESULT tag=101 err=0 nentries=1 text=
  • SRCH base="" scope=2 deref=0 filter="(DKIMSelector=C2AA288C-EE47-11E2-9BB0-E820BDD9BDBF)"
  • SRCH attr=DKIMDomain DKIMSelector DKIMKey

OpenLDAP and the Mailbox Store Servers

A significant amount of the information the mailstore uses is stored in the LDAP server. This includes:

  • Domain information
  • Zimlet information
  • User information
  • COS information
  • Server information

The store has local caches for all of this information. By default, the caches expire after 15 minutes, at which point the data is re-fetched from the LDAP servers.
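The expiry interval and cache sizes are controlled by localconfig keys on the mailstore (for example ldap_cache_account_maxage, expressed in minutes), and individual objects can be flushed on demand with zmprov. A hedged example:

  zmlocalconfig ldap_cache_account_maxage
  zmprov flushCache account user@example.com
  zmprov flushCache domain example.com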

Example user information stored in LDAP that is used by the mailstore:

  • Zimlet preferences
  • Sieve rules
  • Signatures
  • Personas
  • All UI preferences

For users, the account object is fetched at authentication time and added to the mailstore account cache.

Interface methods

Perl

Various Zimbra utilities and Amavis use Perl for interfacing with the LDAP server. These use either:

  • Net::LDAP -- A pure-Perl LDAP module implementation
  • Net::LDAPapi -- A wrapper for the C LDAP API for Perl

JNDI (ZCS7 and previous)

Zimbra Java-based applications, such as the mailstore, access LDAP via JNDI in ZCS7 and earlier. This method was dropped for ZCS8 due to the lack of ongoing development and the presence of major bugs that Sun (and later Oracle) declined to fix. In addition, Zimbra depends heavily on using connection pools with Java, so that persistent connections are maintained between the store server and the LDAP servers (reducing lookup time). With JNDI, it is not possible to use TLS-secured connection pools, so clients either have to disable TLS for JNDI, or suffer reduced performance due to JNDI having to individually bind to the LDAP server for every lookup.

UnboundID SDK (ZCS8 and later)

Zimbra Java-based applications, such as the mailstore, access LDAP via the UnboundID SDK for Java in ZCS8 and later. The UnboundID SDK is written by former Sun engineers who worked on the Sun LDAP server, so they have a significant conceptual understanding of how LDAP works and how to properly design an SDK for accessing LDAP. A capability the UnboundID SDK provides that is missing from JNDI is the ability to use connection pools over startTLS (secure LDAP). This significantly reduces operational overhead by allowing a pool of persistent connections to be kept open between the store and the LDAP server.

C LDAP API

Various third-party software packages used by Zimbra access LDAP via the C LDAP API. These include:

  • Postfix
  • OpenDKIM


LDAP Design Considerations with Zimbra

As LDAP is heavily utilized by Zimbra, it is considered best practice to have multiple LDAP servers so that the load can be more evenly distributed between them. With Zimbra, the ldap_url localconfig key controls the connection and fallback order that clients use for read-only lookups. In ZCS8 and later, with the introduction of multi-master replication, the ldap_master_url key controls the connection and fallback order among multiple masters.

Best practices using ldap_url for fallback

In general, a client will connect to the first server listed in ldap_url. If this server is down or unreachable, the client will then fail over to the next entry present in ldap_url. It is best to distribute load among the LDAP servers by listing them in a different order on different clients. For example, with three LDAP servers named ldap1.example.com, ldap2.example.com, and ldap3.example.com, load can be distributed fairly simply. Example:

1st set of clients: ldap_url=ldap://ldap1.example.com ldap://ldap2.example.com ldap://ldap3.example.com
2nd set of clients: ldap_url=ldap://ldap2.example.com ldap://ldap3.example.com ldap://ldap1.example.com
3rd set of clients: ldap_url=ldap://ldap3.example.com ldap://ldap1.example.com ldap://ldap2.example.com
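The ldap_url key is set with zmlocalconfig on each server and is read when the services start, so a restart of the affected services is needed for a change to take effect. For example, applying the second ordering above to a given server:

  zmlocalconfig -e ldap_url="ldap://ldap2.example.com ldap://ldap3.example.com ldap://ldap1.example.com"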

Controlling LDAP failover behavior of the Java clients

Zimbra has two ldap keys that control how the Java client determines if and when it should fail over to the next host in the ldap_url list. These are both stored in localconfig:

  • ldap_connect_timeout -- Determines how long the Java client will wait for a new connection to the LDAP server to be established. Defaults to 30 seconds.
  • ldap_read_timeout -- Determines how long Java clients should wait for a response to a search query. Defaults to 0 (forever). Note: Due to the deficiencies of JNDI, setting this to something other than zero may have unexpected results in ZCS7 and previous.
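Both keys can be inspected and changed with zmlocalconfig; note that the units stored in localconfig may not be seconds in all releases, so check the current value before editing. For example:

  zmlocalconfig ldap_connect_timeout ldap_read_timeout

Changes made with zmlocalconfig -e take effect after the mailbox service is restarted (zmmailboxdctl restart).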

Best practices when using a Load Balancer

Another way to handle the load distribution is to place a load balancer in front of the LDAP server pool, and then present it as a single DNS name to the clients. In this case, you would simply set the ldap_url to the DNS name on all the servers:

ldap_url=ldap://dnsname.example.com

Multi-Master considerations (ZCS8 and later)

Verified Against: ZCS 7.0, ZCS 8.0
Date Created: 11/21/2013
Date Modified: 2015-03-31
Article ID: https://wiki.zimbra.com/index.php?title=LDAP_Architecture


