Projects/HDBBridge
Background
This project extends MIT Kerberos with the capability to dynamically load Heimdal database (HDB) backends. The intent is twofold:
- allow applications with complex HDB backends, such as Samba4, to use the MIT KDC without porting to KDB
- allow kdb5_util to dump a Heimdal database for migration to a native MIT KDB backend
Additionally, a customer may choose to run the MIT KDC with a Heimdal backend as an interim measure, to test compatibility before a full migration. Using Heimdal's LDAP backend, for example, a realm could contain a mix of MIT and Heimdal KDCs sharing the same data.
Architecture
A new KDB database plugin, HDB, acts as a bridge between KDB and HDB. Upon instantiation, it dynamically loads Heimdal's HDB library and maps KDB methods to their HDB equivalents. Whilst there is write support, it is anticipated that the bridge will typically be used read-only, as the two information models do not map completely.
The bridge can also forward policy checking and authorization data signing to Heimdal's windc plugin SPI.
Configuration
A new module should be defined in the [dbmodules] section of krb5.conf:
HDB = {
    db_library = hdb
    heimdal_libdir = /usr/local/heimdal/lib
}
heimdal_libdir should refer to the directory in which Heimdal's libkrb5.so and libhdb.so can be found (and, further, any windc plugins if present). A further option, heimdal_dbname, specifies the HDB database backend and/or database name to load; if this option is absent, the default backend is loaded. Once the module is defined, it can be referred to in the [realms] section, for example:
HEIMDAL.EXAMPLE.ORG = {
    kdc = foo
    admin_server = foo
    database_module = HDB
}
Implementation
Code is in plugins/kdb/hdb. Because the bridge needs to dynamically load Heimdal libraries anyway, there is no support for building the bridge statically. The platform needs to support RTLD_LOCAL (or equivalent), otherwise there will be symbol conflicts between the two Kerberos implementations.
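To make the symbol isolation concrete, here is a minimal sketch of the loading strategy, assuming the heimdal_libdir configured above. The error handling is illustrative, and hdb_create is simply one example of a Heimdal entry point resolved through the private handle rather than the global symbol table (where it could collide with MIT symbols):

#include <dlfcn.h>
#include <stdio.h>

/* Link with -ldl. RTLD_LOCAL keeps Heimdal's krb5 symbols (which
   share names with MIT's) out of the global namespace. */
int main(void)
{
    void *hkrb5, *hhdb, *sym;

    hkrb5 = dlopen("/usr/local/heimdal/lib/libkrb5.so",
                   RTLD_NOW | RTLD_LOCAL);
    if (hkrb5 == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    hhdb = dlopen("/usr/local/heimdal/lib/libhdb.so",
                  RTLD_NOW | RTLD_LOCAL);
    if (hhdb == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Resolve entry points from the Heimdal handle only, never via
       the global symbol table. */
    sym = dlsym(hhdb, "hdb_create");
    if (sym == NULL) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        return 1;
    }

    return 0;
}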
hdb
One interesting issue is support for master keys. Both Kerberos implementations are similar conceptually, however the interface for reading master keys is not exposed by libhdb and the encryption algorithms differ. This has the following implications:
- the dbekd_decrypt_key_data and dbekd_encrypt_key_data implementations by default forward to hdb_unseal_key and hdb_seal_key, respectively
- methods to return a master key return an empty key with ENCTYPE_UNKNOWN, on the presumption this is preferable to poking inside internal Heimdal data structures
- when dumping a Heimdal database with kdb5_util, the -mkey_convert option must be specified; without this the resulting output is useless
- as a special case to support the above, when the dbekd_encrypt_key_data method is called with a non-ENCTYPE_UNKNOWN master key, the default MIT implementation is used (see the sketch after this list)
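A minimal sketch of that special case follows. The DAL signature shown assumes the 1.7-era krb5_dbekd_def_encrypt_key_data default, and kh_forward_encrypt_key_data is a hypothetical stand-in for the conversion into a Heimdal Key and the hop to hdb_seal_key():

#include <kdb.h>

/* Hypothetical helper: convert to a Heimdal Key and call
   hdb_seal_key() through the bridge. */
static krb5_error_code
kh_forward_encrypt_key_data(krb5_context context,
                            const krb5_keyblock *dbkey,
                            const krb5_keysalt *keysalt,
                            int keyver, krb5_key_data *key_data);

static krb5_error_code
kh_db_encrypt_key_data(krb5_context context,
                       const krb5_keyblock *mkey,
                       const krb5_keyblock *dbkey,
                       const krb5_keysalt *keysalt,
                       int keyver,
                       krb5_key_data *key_data)
{
    if (mkey != NULL && mkey->enctype != ENCTYPE_UNKNOWN) {
        /* A real master key was supplied (e.g. kdb5_util dump
           -mkey_convert): use the stock MIT implementation. */
        return krb5_dbekd_def_encrypt_key_data(context, mkey, dbkey,
                                               keysalt, keyver,
                                               key_data);
    }
    /* Otherwise seal via Heimdal. */
    return kh_forward_encrypt_key_data(context, dbkey, keysalt,
                                       keyver, key_data);
}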
The original HDB entry is stored in the e_data field of the KDB entry, so that methods which further interact with Heimdal APIs can recover it. (For example, the SIGN_AUTH_DATA KDB method will pass the original HDB entry to the windc plugin.)
kdb
The HDB bridge does not implement all DAL methods (for example, the policy inquiry ones). For those that do not have default implementations provided by libkdb, the dispatch code has been modified to return KRB5_KDB_DBTYPE_NOSUP if a method is unimplemented. (In some cases, such as iteration methods, 0 is returned instead to indicate no results.)
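Illustrative only: the shape of the modified dispatch guard. The vtable type, member, and accessor below are hypothetical stand-ins for libkdb's internals; KRB5_KDB_DBTYPE_NOSUP is the real error code returned for an unimplemented method:

#include <kdb.h>

/* Hypothetical vtable shape and accessor, for illustration. */
struct kdb_methods {
    krb5_error_code (*get_policy)(krb5_context, char *,
                                  osa_policy_ent_t *, int *);
    /* ... other DAL methods ... */
};

static struct kdb_methods *lookup_methods(krb5_context context);

krb5_error_code
dispatch_get_policy(krb5_context context, char *name,
                    osa_policy_ent_t *policy, int *cnt)
{
    struct kdb_methods *v = lookup_methods(context);

    if (v->get_policy == NULL)
        return KRB5_KDB_DBTYPE_NOSUP;   /* method not implemented */
    return v->get_policy(context, name, policy, cnt);
}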
KDC
Some minor changes to the KDC were necessary to accommodate information model differences.
- the windc client_access() method can return e_data which must be propagated to the client; thus, the policy checking methods were enhanced to support this
- when signing the PAC, it is preferable to have access to the TGS key rather than fetching it again; hence, this is added to SIGN_AUTH_DATA
- for HDB principals, max_life is optional, so the KDC logic now mirrors Heimdal by ignoring zero values of max_life:
life ::= tgt.endtime - ticket.starttime
if (client.max_life != 0)
    life ::= min(life, client.max_life)
if (server.max_life != 0)
    life ::= min(life, server.max_life)
if (max_life_for_realm != 0)
    life ::= min(life, max_life_for_realm)
ticket.endtime ::= ticket.starttime + life
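For concreteness, the same clamping logic rendered as plain C (a sketch; the types and parameter names are illustrative, not the KDC's actual variables):

#include <time.h>

static time_t
clamp_ticket_endtime(time_t starttime, time_t tgt_endtime,
                     time_t client_max_life, time_t server_max_life,
                     time_t max_life_for_realm)
{
    time_t life = tgt_endtime - starttime;

    /* A zero max_life means "no limit", mirroring Heimdal. */
    if (client_max_life != 0 && life > client_max_life)
        life = client_max_life;
    if (server_max_life != 0 && life > server_max_life)
        life = server_max_life;
    if (max_life_for_realm != 0 && life > max_life_for_realm)
        life = max_life_for_realm;

    return starttime + life;
}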
extensions
HDB extensions roughly map to TL data, although we only support the intersection of both sets. Presently this amounts to the modification time and the constrained delegation ACL (which is checked against the original HDB entry).
The marshalling of HDB extensions is dispatched via kh_hdb_extension_vtable, so adding support for new extensions is easy: simply add a marshal and unmarshal callback with the following signatures:
typedef krb5_error_code
(*kh_hdb_marshal_extension_fn)(krb5_context, const krb5_db_entry *,
                               HDB_extension *);
typedef krb5_error_code
(*kh_hdb_unmarshal_extension_fn)(krb5_context, HDB_extension *,
                                 krb5_db_entry *);
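As an illustration, a new extension might be wired in roughly as follows. The callbacks are hypothetical stubs, the vtable entry layout is assumed (the real kh_hdb_extension_vtable in plugins/kdb/hdb may differ), and the includes are schematic, since in reality the bridge cannot pull both implementations' headers into one translation unit:

#include <kdb.h>   /* MIT: krb5_db_entry (schematic) */
#include <hdb.h>   /* Heimdal: HDB_extension (schematic) */

/* Assumed shape of a table entry. */
struct kh_hdb_extension_ops {
    unsigned int type;                       /* HDB extension tag */
    kh_hdb_marshal_extension_fn marshal;
    kh_hdb_unmarshal_extension_fn unmarshal;
};

/* Hypothetical callbacks for a new extension. */
static krb5_error_code
my_ext_marshal(krb5_context context, const krb5_db_entry *kentry,
               HDB_extension *ext)
{
    /* populate ext from the KDB entry's TL data */
    return 0;
}

static krb5_error_code
my_ext_unmarshal(krb5_context context, HDB_extension *ext,
                 krb5_db_entry *kentry)
{
    /* copy the extension back into the KDB entry's TL data */
    return 0;
}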
windc
In addition to HDB, Heimdal supports a "windc" plugin that implements MS PAC generation and verification, as well as AS-REQ authorization. The PAC methods could have been wrapped inside an authdata plugin, but in order to also support the authorization check, all of them live in the HDB bridge. The windc shim is loaded only when the backend is opened with KRB5_KDB_SRV_TYPE_KDC usage.
A windc plugin exposes the following methods:
- pac_generate
- pac_verify
- client_access
The first two are handled by the SIGN_AUTH_DATA KDB invoke method; the last by CHECK_POLICY_AS.
SIGN_AUTH_DATA
Simplified pseudo-code follows (refer to the actual code for details; there are special cases dealing with constrained delegation, retrieving the correct TGS key, choosing between the built-in and the windc plugin's pac_verify functions, and so on).
sign_auth_data()
{
    if (!is_as_req)
        pac ::= find existing authdata from TGT

    if ((is_as_req && (flags & INCLUDE_PAC)) ||
        (pac == null && client != null)) {
        pac ::= pac_generate()
    } else {
        pac_verify(pac)
    }

    pac_sign(pac)
    encode_authdata_container(pac)
}
CHECK_POLICY_AS
Simplified pseudo-code follows. The bulk of the actual implementation is concerned with marshalling MIT to Heimdal data structures.
check_policy_as()
{
    client_access()
}
CHECK_ALLOWED_TO_DELEGATE
Strictly, this method is not related to the windc plugin; it is implemented by referring to the constrained delegation ACL HDB extension. However, as constrained delegation is presently only useful in a Windows environment, it is included in this section. Simplified pseudo-code follows:
check_allowed_to_delegate()
{
    foreach extension in extension_data.allowed_to_delegate_to {
        if (proxy == extension.principal)
            return 0
    }
    return KDC_ERR_POLICY
}
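The same check rendered in C, assuming the ACL extension has already been unmarshalled into an array of principals; krb5_principal_compare and KRB5KDC_ERR_POLICY are real MIT APIs, the rest is illustrative:

#include <krb5.h>

/* Sketch: return 0 if proxy appears in the allowed-to-delegate-to
   ACL, KRB5KDC_ERR_POLICY otherwise. The caller is assumed to have
   unmarshalled the HDB extension into (count, acl). */
static krb5_error_code
kh_check_allowed_to_delegate(krb5_context context,
                             krb5_const_principal proxy,
                             unsigned int count,
                             krb5_principal *acl)
{
    unsigned int i;

    for (i = 0; i < count; i++) {
        if (krb5_principal_compare(context, proxy, acl[i]))
            return 0;
    }
    return KRB5KDC_ERR_POLICY;
}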
Open issues
- When marshalling the KDC_REQ to pass to the windc client_access method, the following fields are not marshalled: padata, enc_authorization_data and additional_tickets.
- A future direction might be to implement an SPI based on draft-ietf-krb-wg-kdc-model, and glue that to both KDB and HDB.
Status
Code is in the users/lhoward/heimmig branch. Presently I have only tested with the HDB flat file backend.