1.6. Creating Service Principals and Keytab Files for Hadoop

Each service and sub-service in Hadoop must have its own principal. A principal name in a given realm consists of a primary name and an instance name, which in this case is the FQDN of the host that runs that service. Because services do not log in with a password to acquire their tickets, their principal's authentication credentials are stored in a keytab file, which is extracted from the Kerberos database and stored locally alongside the service principal on the service component host.

First you must create the principal, using mandatory naming conventions.

Then you must create the keytab file with that principal's information and copy the file to the keytab directory on the appropriate service host.

Note

Principals can be created either on the KDC machine itself or through the network, using an "admin" principal. The following instructions assume you are using the KDC machine and the kadmin.local command-line administration utility. Using kadmin.local on the KDC machine allows you to create principals without needing to create a separate "admin" principal before you start.

  1. Open the kadmin.local utility on the KDC machine:
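
    For example, as root on the KDC host (kadmin.local is part of MIT Kerberos, which these instructions assume):

    kadmin.local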

  2. Create the service principals:

    addprinc -randkey $primary_name/$fully.qualified.domain.name@EXAMPLE.COM

    The -randkey option generates a random key for the principal; because services authenticate using a keytab rather than a password, no password needs to be set.

    Note that in the example, each service principal's primary name has the instance name appended to it: the FQDN of the host on which the service runs. This provides a unique principal name for services that run on multiple hosts, like DataNodes and TaskTrackers. Appending the hostname serves to distinguish, for example, a request from DataNode A from a request from DataNode B. This is important for two reasons:

    • If the Kerberos credentials for one DataNode are compromised, it does not automatically lead to all DataNodes being compromised.

    • If multiple DataNodes have exactly the same principal and are simultaneously connecting to the NameNode, and if the Kerberos authenticators being sent happen to have the same timestamp, the authentication is rejected as a replay request.

    The $primary_name part of the principal name must match the values in the table below:


    Table 13.2. Service Principals

    Service         Component                     Mandatory Principal Name
    HDFS            NameNode                      nn/$FQDN
    HDFS            NameNode HTTP                 HTTP/$FQDN
    HDFS            SecondaryNameNode             nn/$FQDN
    HDFS            SecondaryNameNode HTTP        HTTP/$FQDN
    HDFS            DataNode                      dn/$FQDN
    MapReduce       JobTracker                    jt/$FQDN
    MapReduce       TaskTracker                   tt/$FQDN
    Oozie           Oozie Server                  oozie/$FQDN
    Oozie           Oozie HTTP                    HTTP/$FQDN
    Hive            Hive Metastore, HiveServer2   hive/$FQDN
    Hive            WebHCat                       HTTP/$FQDN
    HBase           MasterServer                  hbase/$FQDN
    HBase           RegionServer                  hbase/$FQDN
    ZooKeeper       ZooKeeper                     zookeeper/$FQDN
    Nagios Server   Nagios                        nagios/$FQDN

    For example: To create the principal for a DataNode service, issue this command:

    addprinc -randkey dn/$DataNode-Host@EXAMPLE.COM 
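
    If you are scripting this step, principals can also be created non-interactively by passing a query to kadmin.local (a sketch; the worker host name is a placeholder):

    kadmin.local -q "addprinc -randkey dn/worker1.example.com@EXAMPLE.COM"
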
  3. In addition, you must create three special principals for Ambari's own use. These principals do not need the FQDN appended to the primary name:


    Table 13.3. Ambari Principals

    User                     Mandatory Principal Name
    Ambari Smoke Test User   ambari-user
    Ambari HDFS Test User    hdfs
    Ambari HBase Test User   hbase
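
    For example, from inside kadmin.local (principal names taken from the table above; note that no FQDN instance is appended):

    addprinc -randkey ambari-user@EXAMPLE.COM
    addprinc -randkey hdfs@EXAMPLE.COM
    addprinc -randkey hbase@EXAMPLE.COM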

  4. Once the principals are created in the database, you can extract the related keytab files for transfer to the appropriate host:

    xst -norandkey -k $keytab_file_name $primary_name/$fully.qualified.domain.name@EXAMPLE.COM

    You must use the mandatory names for the $keytab_file_name variable shown in the table below.


    Table 13.4. Service Keytab File Names

    Component                     Principal Name    Mandatory Keytab File Name
    NameNode                      nn/$FQDN          nn.service.keytab
    NameNode HTTP                 HTTP/$FQDN        spnego.service.keytab
    SecondaryNameNode             nn/$FQDN          nn.service.keytab
    SecondaryNameNode HTTP        HTTP/$FQDN        spnego.service.keytab
    DataNode                      dn/$FQDN          dn.service.keytab
    JobTracker                    jt/$FQDN          jt.service.keytab
    TaskTracker                   tt/$FQDN          tt.service.keytab
    Oozie Server                  oozie/$FQDN       oozie.service.keytab
    Oozie HTTP                    HTTP/$FQDN        spnego.service.keytab
    Hive Metastore, HiveServer2   hive/$FQDN        hive.service.keytab
    WebHCat                       HTTP/$FQDN        spnego.service.keytab
    HBase Master Server           hbase/$FQDN       hbase.service.keytab
    HBase RegionServer            hbase/$FQDN       hbase.service.keytab
    ZooKeeper                     zookeeper/$FQDN   zk.service.keytab
    Nagios Server                 nagios/$FQDN      nagios.service.keytab
    Ambari Smoke Test User        ambari-user       smokeuser.headless.keytab
    Ambari HDFS Test User         hdfs              hdfs.headless.keytab
    Ambari HBase Test User        hbase             hbase.headless.keytab

    For example: To create the keytab files for NameNode HTTP, issue this command:

    xst -norandkey -k spnego.service.keytab HTTP/<namenode-host>

    Note

    If you have a large cluster, you may want to create a script to automate creating your principals and keytabs. To help with that, you can download a CSV-formatted file of all the required principal names and keytab files from the Ambari Web GUI. Select Admin view -> Security -> Enable Security and run the Add Security wizard, using the default values. At the bottom of the third page, Create Principals and Keytabs, click Download CSV. Then use the Back button to exit the wizard until you have finished your setup.
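
    A minimal sketch of such a script, assuming the CSV columns are host, principal, and keytab file name (this column order is an assumption; verify it against the file you download, and note that principals.csv and the /tmp/keytabs staging directory are placeholder names):

    #!/bin/bash
    # Sketch: create principals and extract keytabs from an Ambari-generated CSV.
    while IFS=, read -r host principal keytab; do
      mkdir -p "/tmp/keytabs/$host"
      kadmin.local -q "addprinc -randkey $principal"
      kadmin.local -q "xst -norandkey -k /tmp/keytabs/$host/$keytab $principal"
    done < principals.csv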

  5. When the keytab files have been created, on each host create a directory for them and set appropriate permissions.

    mkdir -p /etc/security/keytabs/
    chown root:hadoop /etc/security/keytabs
    chmod 750 /etc/security/keytabs
  6. Copy the appropriate keytab file to each host. If a host runs more than one component (for example, both TaskTracker and DataNode), copy keytabs for both components. The Ambari Test User keytabs should be copied to the NameNode host.
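
    For example, pushing one keytab from the KDC to a worker host (a sketch; the host name is a placeholder, and any transfer mechanism that protects the file in transit works as well):

    scp dn.service.keytab root@worker1.example.com:/etc/security/keytabs/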

  7. Set appropriate permissions for the keytabs. (A scripted alternative is sketched after this list.)

    1. On the HDFS NameNode and SecondaryNameNode hosts:

      chown hdfs:hadoop /etc/security/keytabs/nn.service.keytab
      chmod 400 /etc/security/keytabs/nn.service.keytab
      chown root:hadoop /etc/security/keytabs/spnego.service.keytab 
      chmod 440 /etc/security/keytabs/spnego.service.keytab

    2. On the HDFS NameNode host, for the Ambari Test Users:

      chown ambari-qa:hadoop /etc/security/keytabs/smokeuser.headless.keytab
      chmod 440 /etc/security/keytabs/smokeuser.headless.keytab
      chown hdfs:hadoop /etc/security/keytabs/hdfs.headless.keytab
      chmod 440 /etc/security/keytabs/hdfs.headless.keytab
      chown hbase:hadoop /etc/security/keytabs/hbase.headless.keytab
      chmod 440 /etc/security/keytabs/hbase.headless.keytab
    3. On each host that runs an HDFS DataNode:

      chown hdfs:hadoop /etc/security/keytabs/dn.service.keytab
      chmod 400 /etc/security/keytabs/dn.service.keytab

    4. On the host that runs the MapReduce JobTracker:

      chown mapred:hadoop /etc/security/keytabs/jt.service.keytab
      chmod 400 /etc/security/keytabs/jt.service.keytab
    5. On each host that runs a MapReduce TaskTracker:

      chown mapred:hadoop /etc/security/keytabs/tt.service.keytab
      chmod 400 /etc/security/keytabs/tt.service.keytab
    6. On the host that runs the Oozie Server:

      chown oozie:hadoop /etc/security/keytabs/oozie.service.keytab
      chmod 400 /etc/security/keytabs/oozie.service.keytab
      chown root:hadoop /etc/security/keytabs/spnego.service.keytab 
      chmod 440 /etc/security/keytabs/spnego.service.keytab
    7. On the host that runs the Hive Metastore, HiveServer2 and WebHCat:

      chown hive:hadoop /etc/security/keytabs/hive.service.keytab
      chmod 400 /etc/security/keytabs/hive.service.keytab
      chown root:hadoop /etc/security/keytabs/spnego.service.keytab 
      chmod 440 /etc/security/keytabs/spnego.service.keytab
    8. On hosts that run the HBase MasterServer, RegionServer and ZooKeeper:

      chown hbase:hadoop /etc/security/keytabs/hbase.service.keytab
      chmod 400 /etc/security/keytabs/hbase.service.keytab
      chown zookeeper:hadoop /etc/security/keytabs/zk.service.keytab 
      chmod 400 /etc/security/keytabs/zk.service.keytab
    9. On the host that runs the Nagios server:

      chown nagios:nagios /etc/security/keytabs/nagios.service.keytab
      chmod 400 /etc/security/keytabs/nagios.service.keytab
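
    If a host runs several components, the sub-steps above can be applied from a single script. A minimal sketch (owner, group, and mode values are taken verbatim from the sub-steps above; keytabs not present on a host are skipped):

    #!/bin/bash
    # Sketch: apply keytab ownership and permissions per the sub-steps above.
    set_perms() {   # usage: set_perms <owner> <group> <mode> <keytab>
      local f="/etc/security/keytabs/$4"
      [ -f "$f" ] || return 0      # skip keytabs not on this host
      chown "$1:$2" "$f"
      chmod "$3" "$f"
    }
    set_perms hdfs      hadoop 400 nn.service.keytab
    set_perms root      hadoop 440 spnego.service.keytab
    set_perms hdfs      hadoop 400 dn.service.keytab
    set_perms mapred    hadoop 400 jt.service.keytab
    set_perms mapred    hadoop 400 tt.service.keytab
    set_perms oozie     hadoop 400 oozie.service.keytab
    set_perms hive      hadoop 400 hive.service.keytab
    set_perms hbase     hadoop 400 hbase.service.keytab
    set_perms zookeeper hadoop 400 zk.service.keytab
    set_perms nagios    nagios 400 nagios.service.keytab
    set_perms ambari-qa hadoop 440 smokeuser.headless.keytab
    set_perms hdfs      hadoop 440 hdfs.headless.keytab
    set_perms hbase     hadoop 440 hbase.headless.keytab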

  8. Verify that the correct keytab files and principals are associated with the correct service using the klist command. For example, on the NameNode:

    klist -k -t /etc/security/keytabs/nn.service.keytab

    Do this for each service keytab on every host in your cluster.
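
    To check every keytab on a host in one pass (a sketch):

    # print the principals and key timestamps in each keytab
    for kt in /etc/security/keytabs/*.keytab; do
      echo "== $kt"; klist -k -t "$kt"
    done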
