These guidelines cover the identity proofing, authentication, and federation of users (e.g., employees, contractors, or private individuals) who interact with government information systems over networks. They define technical requirements in each of the areas of identity proofing, enrollment, authenticators, management processes, authentication protocols, federation, and related assertions. They also offer technical recommendations and other informative text as helpful suggestions. The guidelines are not intended to constrain the development or use of standards outside of this purpose. This publication supersedes NIST Special Publication (SP) 800-63-3.
assertions; authentication; authentication assurance; authenticator; credential service provider; digital authentication; identity proofing; federation; passwords; PKI.
This publication and its companion volumes — [SP800-63A], [SP800-63B], and [SP800-63C] — provide technical and process guidelines to organizations for the implementation of digital identity services.
This section is informative.
The rapid proliferation of online services over the past few years has heightened the need for reliable, secure, and privacy-protective digital identity solutions. A digital identity is always unique in the context of an online service. However, a person could have multiple digital identities and, while a digital identity could convey a unique and specific meaning within the context of an online service, the real-life identity of the individual behind the digital identity might not be known. When confidence in a person’s real-life identity is not required to provide access to an online service, organizations can use anonymous or pseudonymous accounts. In all other use cases, a digital identity is intended to establish trust between the holder of the digital identity and the person, organization, or system interacting with the online service. However, this process can present challenges. There are multiple opportunities for mistakes, miscommunication, and attacks that fraudulently claim another person’s identity. Additionally, given the broad range of individual needs, constraints, capacities, and preferences, online services must be designed with flexibility and customer experience in mind to support broad and enduring participation and access.
Digital identity risks are dynamic and exist along a continuum. Consequently, a digital identity risk management approach should seek to manage risks using outcome-based approaches that are designed to meet the organization’s unique needs. These guidelines define specific assurance levels that operate as baseline control sets. These assurance levels provide multiple benefits, including a starting point for organizations in their risk management journey and a common structure for supporting interoperability between different entities. It is, however, impractical to create assurance levels that can comprehensively address the entire spectrum of risks, threats, or considerations that an organization will face when deploying an identity solution. For this reason, these guidelines promote a risk-based approach to digital identity solution implementation rather than a compliance-oriented approach, and organizations are encouraged to tailor their control implementations based on the processes defined in these guidelines.
Additionally, risks associated with digital identity stretch beyond the potential impacts to the organization providing online services. These guidelines endeavor to robustly and explicitly account for risks to individuals, communities, and other organizations. Organizations should also consider how digital identity decisions might affect, or need to accommodate, the individuals who interact with the organization’s programs and services. Privacy and customer experience for individuals should be considered along with security. Additionally, organizations should consider their digital identity approach alongside other mechanisms for identity management, such as those used in call centers and in-person interactions. By taking a customer-centric and continuously informed approach to mission delivery, organizations have an opportunity to incrementally build trust with the populations they serve, improve customer experience, identify issues more quickly, and provide individuals with appropriate and effective redress options.
The composition, models, and availability of identity services have significantly changed since the first version of SP 800-63 was released, as have the considerations and challenges of deploying secure, private, and usable services to users. This revision addresses these challenges by presenting guidance and requirements based on the roles and functions that entities perform as part of the overall digital identity model.
Additionally, this publication provides instruction for credential service providers (CSPs), verifiers, and relying parties (RPs), to supplement the NIST Risk Management Framework [NISTRMF] and its component publications. It describes the risk management processes that organizations should follow to implement digital identity services and expands upon the NIST RMF by outlining how customer experience considerations should be incorporated. It also highlights the importance of considering impacts on enterprise operations and assets, individuals, and other organizations. Furthermore, digital identity management processes for identity proofing, authentication, and federation typically involve processing personal information, which can present privacy risks. Therefore, these guidelines include privacy requirements and considerations to help mitigate potential associated risks.
Finally, while these guidelines provide organizations with technical requirements and recommendations for establishing, maintaining, and authenticating the digital identity of subjects who access digital systems over a network, they also recommend integration with systems and processes that are often outside of the control of identity and IT teams. As such, these guidelines provide considerations to improve coordination with organizations and deliver more effective, modern, and customer-driven online services.
These guidelines apply to all online services for which some level of assurance in a digital identity is required, regardless of the constituency (e.g., the public, business partners, and government employees and contractors). For this publication, “person” refers only to natural persons.
These guidelines primarily focus on organizational services that interact with external users, such as individuals accessing public benefits or private-sector partners accessing collaboration spaces. However, they also apply to federal systems accessed by employees and contractors. The Personal Identity Verification (PIV) of Federal Employees and Contractors standard [FIPS201], and its corresponding set of Special Publications and organization-specific instructions, extend these guidelines for the federal enterprise by providing additional technical controls and processes for issuing and managing PIV Cards, binding additional authenticators as derived PIV credentials, and using federation architectures and protocols with PIV systems.
Online services not covered by these guidelines include those associated with national security systems as defined in [44 U.S.C. § 3552(b)(6)]. Private-sector organizations and state, local, and tribal governments whose digital processes require varying levels of digital identity assurance may consider the use of these standards where appropriate.
These guidelines address logical access to online systems, services, and applications. They do not specifically address physical access control processes. However, the processes specified in these guidelines can be applied to physical access use cases where appropriate. Additionally, these guidelines do not explicitly address some subjects including, but not limited to, machine-to-machine authentication, interconnected devices (e.g., Internet of Things [IoT] devices), or access to Application Programming Interfaces (APIs) on behalf of subjects.
These guidelines support the mitigation of the negative impacts of errors that occur during the functions of identity proofing, authentication, and federation. Section 3, Digital Identity Risk Management, describes the risk assessment process and how the results of the risk assessment and additional context inform the selection of controls to secure the identity proofing, authentication, and federation processes. Controls are selected by determining the assurance level required to mitigate each applicable type of digital identity error for a particular service based on risk and mission.
Specifically, organizations are required to select an assurance level for each of the following functions:
SP 800-63 is organized as the following suite of volumes:
Effective enterprise risk management is multidisciplinary by design and involves the consideration of varied sets of factors and expectations. In a digital identity risk management context, these factors include, but are not limited to, information security, fraud, privacy, and customer experience. It is important for risk management efforts to weigh these factors as they relate to enterprise assets and operations, individuals, and other organizations.
During the process of analyzing factors that are relevant to digital identity, organizations might determine that measures outside of those specified in this publication are appropriate in certain contexts (e.g., where privacy or other legal requirements exist or where the output of a risk assessment leads the organization to determine that additional measures or alternative procedural safeguards are appropriate). Organizations, including federal agencies, can employ compensating or supplemental controls that are not specified in this publication. They can also consider partitioning the functionality of an online service to allow less sensitive functions to be available at a lower level of assurance to improve access without compromising security.
The considerations detailed below support enterprise risk management efforts and encourage informed and customer-centered service delivery. While this list of considerations is not exhaustive, it highlights a set of cross-cutting factors that are likely to impact decision-making associated with digital identity management.
It is increasingly important for organizations to assess and manage digital identity security risks, such as unauthorized access due to impersonation. As organizations consult these guidelines, they should consider potential impacts to the confidentiality, integrity, and availability of information and information systems that they manage, and that their service providers and business partners manage, on behalf of the individuals and communities that they serve.
Federal agencies implementing these guidelines are required to meet statutory responsibilities, including those under the Federal Information Security Modernization Act (FISMA) of 2014 [FISMA] and related NIST standards and guidelines. NIST recommends that non-federal organizations implementing these guidelines follow comparable standards (e.g., ISO/IEC 27001) to ensure the secure operation of their digital systems.
FISMA requires federal agencies to implement appropriate controls to protect federal information and information systems from unauthorized access, use, disclosure, disruption, or modification. The NIST RMF [NISTRMF] provides a process that integrates security, privacy, and cyber supply chain risk management activities into the system development life cycle. It is expected that federal agencies and organizations that provide services under these guidelines have already implemented the controls and processes required under FISMA and associated NIST risk management processes and publications.
The controls and requirements encompassed by the identity, authentication, and federation assurance levels under these guidelines augment but do not replace or alter the information and information system controls determined under FISMA and the RMF.
As threats evolve, it is important for organizations to assess and manage identity-related fraud risks associated with identity proofing and authentication processes. As organizations consult these guidelines, they should consider the evolving threat environment, the availability of innovative anti-fraud measures in the digital identity market, and the potential impacts of identity-related fraud on their systems and users. This is particularly important for public-facing online services where the impact of identity-related fraud on digital government service delivery, public trust, and organization reputation can be substantial.
This version enhances measures to combat identity theft and identity-related fraud by repurposing IAL1 as a new assurance level, updating authentication risk and threat models to account for new attacks, providing new options for phishing-resistant authentication, introducing requirements to prevent automated attacks against enrollment processes, and preparing for new technologies (e.g., mobile driver’s licenses and verifiable credentials) that can leverage strong identity proofing and authentication.
When designing, implementing, and managing digital identity systems, it is imperative to consider the potential of that system to create privacy-related problems for individuals when processing personal information (e.g., collection, storage, use, and destruction) and the potential impacts of problematic data actions. If a breach of personal information or a release of sensitive information occurs, organizations need to ensure that the privacy notices describe, in plain language, what information was improperly released and, if known, how the information was exploited.
Organizations need to demonstrate how organizational privacy policies and system privacy requirements have been implemented in their systems. These guidelines recommend that organizations take steps to implement digital identity risk management with privacy in mind, which can be supported by referencing:
Furthermore, each volume of SP 800-63 contains a specific section that provides detailed privacy guidance and considerations for implementing the processes, controls, and requirements presented in that volume as well as normative requirements on data collection, retention, and minimization.
It is essential that these guidelines provide organizations with the ability to create modern, streamlined, and responsive customer experiences. To do this, the guidelines allow organizations to factor in the capabilities and expectations of users when making decisions and trade-offs in the risk management process. Organizations that implement these guidelines must understand their user populations, capabilities, and limitations as part of setting an effective digital identity risk management strategy.
There have been several major additions to these guidelines to ensure responsive and effective customer experiences. In addition to adding new technologies to each of the volumes, as applicable, this volume introduces two key concepts:
These two concepts are discussed in detail in Sec. 3 of this document.
As a part of improving customer experience, these guidelines also emphasize the need to provide options for users to “meet the customer where they are.” When coupled with a continuous improvement strategy and customer-centered design, this can help identify the opportunities, processes, business partners, and multi-channel identity proofing and service delivery methods that best support the needs of the populations that an organization serves.
Additionally, usability refers to the extent to which a system, product, or service can be used to achieve goals with effectiveness, efficiency, and satisfaction in a specified context of use. Usability supports the major objectives of customer experience, service delivery, and security, and requires an understanding of the people who interact with a digital identity system or process, as well as their unique capabilities and context of use.
Readers of this guideline should take a holistic approach to considering the interactions that each user will engage in throughout the process of enrolling in and authenticating to a service. Throughout the design and development of a digital identity system or process, it is important to conduct usability evaluations with representative users and perform realistic scenarios and tasks in appropriate contexts of use. Additionally, following usability guidelines and considerations can help organizations meet their customer experience goals. Digital identity management processes should be designed and implemented so that it is easy for users to do the right thing, hard to do the wrong thing, and easy to recover when the wrong thing happens.
This guideline uses the following typographical conventions in text:
This document is organized as follows. Each section is labeled as either normative (i.e., mandatory for compliance) or informative (i.e., not mandatory).
When described generically or bundled, these guidelines will refer to IAL, AAL, and FAL as xAL. Each xAL has three assurance levels.
This section is informative.
These guidelines use digital identity models that reflect technologies and architectures that are currently available in the market. These models have a variety of entities and functions and vary in complexity. Simple models group functions (e.g., creating subscriber accounts, providing attributes) under a single entity. More complex models separate these functions among multiple entities.
The roles and functions found in these digital identity models include:
Subject: In these guidelines, a subject is a person and is represented by one of three roles, depending on where they are in the digital identity process.
Service provider: Service providers can perform any combination of functions involved in granting access to and delivering online services, such as a credential service provider, relying party, verifier, and identity provider.
Credential service provider (CSP): CSP functions include identity proofing applicants, enrolling them into their identity service, establishing subscriber accounts, and binding authenticators to those accounts. A subscriber account is the CSP’s established record of the subscriber, the subscriber’s attributes, and associated authenticators. CSP functions may be performed by an independent third party.
Relying party (RP): RPs provide online transactions and services and rely upon a verifier’s assertion of a subscriber’s identity to grant access to those services. When using federation, the RP accesses the information in the subscriber account through assertions from an identity provider (IdP).
Verifier: A verifier confirms the claimant’s identity by verifying the claimant’s possession and control of one or more authenticators using an authentication protocol. To do this, the verifier needs to confirm the binding of the authenticators with the subscriber account and check that the subscriber account is active.
Identity provider (IdP): When using federation, the IdP manages the subscriber’s primary authenticators and issues assertions derived from the subscriber account.
While presented as separate roles, the functions of the CSP, verifier, and IdP may be performed by a single entity or distributed across multiple entities, depending on the implementation (see Sec. 2.5).
[SP800-63A], Digital Identity Guidelines: Identity Proofing and Enrollment, provides general guidance information and normative requirements for the identity proofing and enrollment processes as well as IAL-specific requirements.
Figure 1 illustrates a common transaction sequence for the identity proofing and enrollment functions.
Identity proofing and enrollment begin when an applicant initiates identity proofing, often by attempting to access an online application served by the CSP. The CSP or its component service requests identity evidence and attributes from the applicant, which the applicant submits via an online or in-person transaction. The CSP resolves the user (i.e., uniquely distinguishes the user), validates the accuracy and authenticity of the evidence, and validates the accuracy of the attributes. If the applicant is successfully identity-proofed, they are enrolled in the identity service as a subscriber of that CSP. A unique subscriber account is then created, and one or more authenticators are registered to that account.
Subscribers have a responsibility to maintain control of their authenticators (e.g., guard against theft) and comply with CSP policies to remain in good standing with the CSP.
Fig. 1. Sample identity proofing and enrollment digital identity model
At the time of enrollment, the CSP establishes a subscriber account to uniquely identify each subscriber and record information about the subscriber and any authenticators bound to that subscriber account.
See Sec. 5 of [SP800-63A], subscriber accounts, for more information and normative requirements.
[SP800-63B], Authentication and Authenticator Management, provides normative descriptions of permitted authenticator types, their characteristics (e.g., phishing resistance), and authentication processes appropriate for each AAL.
An authenticator is a means of demonstrating the control or possession of one or more factors in an authentication protocol. These guidelines define three types of authentication factors used for authentication:
Single-factor authentication requires only one of the above factors, most often “something you know.” Multiple instances of the same factor still constitute single-factor authentication. For example, a user-generated PIN and a password do not constitute two factors as they are both “something you know.” Multi-factor authentication (MFA) refers to the use of more than one distinct factor.
This guideline specifies that authenticators always contain or comprise a secret. The secrets contained in an authenticator are based on either key pairs (i.e., asymmetric cryptographic keys) or shared secrets, including symmetric cryptographic keys, seeds for generating one-time passwords (OTP), and passwords. Asymmetric key pairs consist of a public key and a related private key. The private key is stored on the authenticator and is only available for use by the claimant who possesses and controls the authenticator. Symmetric keys are generally chosen at random, complex and long enough to thwart network-based guessing attacks, and stored in hardware or software that the subscriber controls.
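As an illustrative, non-normative sketch of the shared-secret case described above, the exchange below shows how a verifier can confirm possession of a symmetric key through a challenge-response interaction without the key itself ever crossing the network. The function names and key sizes are hypothetical choices for this example, not part of these guidelines:

```python
import hashlib
import hmac
import secrets

# Hypothetical shared symmetric key: chosen at random and long enough
# to thwart network-based guessing attacks, held by both the verifier
# and the subscriber-controlled authenticator.
SHARED_KEY = secrets.token_bytes(32)

def verifier_issue_challenge() -> bytes:
    """Verifier sends a fresh random nonce so a captured response cannot be replayed."""
    return secrets.token_bytes(16)

def authenticator_respond(key: bytes, challenge: bytes) -> bytes:
    """Authenticator output: an HMAC over the challenge using the shared secret."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected output and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = verifier_issue_challenge()
response = authenticator_respond(SHARED_KEY, challenge)
assert verifier_check(SHARED_KEY, challenge, response)
```

An asymmetric authenticator would follow the same pattern, with the authenticator signing the challenge using its private key and the verifier checking the signature with the corresponding public key.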
Passwords used locally as an activation factor for a multi-factor authenticator are referred to as activation secrets. An activation secret is used to obtain access to a stored authentication key and remains within the authenticator and its associated user endpoint. An example of an activation secret would be the PIN used to activate a PIV card.
Biometric characteristics are unique, personal attributes that can be used to verify the identity of a person who is physically present at the point of authentication. This includes facial features, fingerprints, and iris patterns, among others. While biometric characteristics cannot be used for single-factor authentication, they can be used as an authentication factor for multi-factor authentication in combination with a physical authenticator (i.e., something you have).
Some commonly used authentication methods do not contain or comprise secrets and are, therefore, not acceptable for use under these guidelines, such as:
The authentication process enables an RP to trust that a claimant is who they say they are to some level of assurance. The sample authentication process in Fig. 2 shows interactions between the RP, a claimant, and a verifier/CSP. The verifier is a functional role and is frequently implemented in combination with the CSP, RP, or both (as shown in Fig. 4).
Fig. 2. Sample authentication process
A successful authentication process demonstrates that the claimant has possession and control of one or more valid authenticators that are bound to the subscriber’s identity. In general, this is done using an authentication protocol that involves an interaction between the verifier and the claimant, where the claimant uses one or more authenticators to generate the authenticator output to be sent to the verifier. The verifier verifies the output and passes a positive result to the RP. The RP then opens an authenticated session with the verified subscriber.
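As one concrete, non-normative instance of an authenticator output that a verifier can independently recompute, the sketch below implements the HOTP algorithm from RFC 4226, which derives a one-time password from a shared seed and a moving counter. The `hotp` function name is a choice made for this example; the algorithm itself is as specified in the RFC:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """Compute an RFC 4226 HOTP value from a shared seed and counter."""
    # The counter is encoded as an 8-byte big-endian value and MACed
    # with the shared seed; both parties track the counter independently.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte selects a
    # 4-byte window, whose high bit is masked off.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 Appendix D test vector: seed "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because both the authenticator and the verifier can derive the same value from the shared seed, the verifier can confirm possession of the seed without the seed being transmitted during authentication.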
The exact nature of the interaction is important in determining the overall security of the system. Well-designed protocols protect the integrity and confidentiality of communication between the claimant and the verifier both during and after the authentication and can help limit the damage done by an attacker masquerading as a legitimate verifier (i.e., phishing).
Normative requirements can be found in [SP800-63C], Federation and Assertions.
Section III of OMB [M-19-17], Enabling Mission Delivery through Improved Identity, Credential, and Access Management, directs agencies to support cross-government identity federation and interoperability. The term federation can be applied to several different approaches that involve the sharing of information between different trust domains, and may differ based on the kind of information that is being shared between the domains. These guidelines address the federation processes that allow for the conveyance of identity and authentication information based on trust agreements across a set of networked systems through federation assertions.
There are many benefits to using federated architectures including, but not limited to:
While the federation process is generally the preferred approach to authentication when the RP and IdP are not administered together under a common security domain, federation can also be applied within a single security domain for a variety of benefits, including centralized account management and technical integration.
These guidelines are agnostic to the identity proofing, authentication, and federation architectures that an organization selects, and they allow organizations to deploy a digital identity scheme according to their own requirements. However, there are scenarios in which federation could be more efficient and effective than establishing identity services that are local to the organization or individual applications, such as:
An organization might want to consider accepting federated identity attributes if any of the following apply:
The entities and interactions that comprise the non-federated digital identity model are illustrated in Fig. 3. The general-purpose federated digital identity model is illustrated in Fig. 4, and a federated digital identity model with a subscriber-controlled wallet is illustrated in Fig. 5.
In the two cases described in Fig. 3 and Fig. 4, the verifier does not always need to communicate in real time with the CSP to complete the authentication activity (e.g., digital certificates can be used). Therefore, the line between the verifier and the CSP represents a logical link between the two entities. In some implementations, the verifier, RP, and CSP functions are distributed. However, if these functions reside on the same platform, the interactions between the functions are signals between applications or application modules that run on the same system rather than using network protocols.
Fig. 3. Non-federated digital identity model example
Figure 3 shows an example of a common sequence of interactions in the non-federated model. Other sequences could also achieve the same functional requirements. One common sequence of interactions for identity proofing and enrollment activities is represented as follows:
Steps 3 through 5 can immediately follow steps 1 and 2 or be done at a later time. The usual sequence of interactions involved in using one or more authenticators to perform digital authentication in the non-federated model is as follows:
Fig. 4. Federated digital identity model example
Figure 4 shows an example of those same common interactions in a federated model.
The usual sequence of interactions involved in using one or more authenticators in the federated model to perform digital authentication is as follows:
Fig. 5. Federated Digital Identity Model With Subscriber-Controlled Wallet Example
Figure 5 shows an example of the interactions in a federated digital identity model in which the subscriber controls a device with software (i.e., a digital wallet) or an account with a cloud service provider (i.e., a hosted wallet) that acts as the IdP. In the terminology of the “three-party model,” the CSP is the issuer, the IdP is the holder (i.e., the user’s device or agent operating on their behalf), and the RP is the verifier. In this model, it is common for the RP to establish a trust agreement with the CSP using a federation authority, as defined in Sec. 3.5 of [SP800-63C]. This arrangement allows the RP to accept assertions from the subscriber-controlled wallet without needing a direct trust relationship with the wallet, as described in Sec. 5 of [SP800-63C].
Other protocols and specifications often refer to attribute bundles as credentials. These guidelines use the term credentials to refer to a different concept. To avoid a conflict, the term attribute bundle is used within these guidelines. Normative requirements for attribute bundles can be found in Sec. 3.12.1 of [SP800-63C].
The usual sequence of interactions involved in providing an assertion to the RP from a subscriber-controlled wallet is as follows:
This section is normative.
This section describes the methodology for assessing digital identity risks associated with online services, including residual risks to users of the online service, the service provider organization, and its mission and business partners. It offers guidance on selecting usable, privacy-enhancing security and anti-fraud controls that mitigate those risks. Additionally, it emphasizes the importance of continuously evaluating the performance of the selected controls.
The Digital Identity Risk Management (DIRM) process focuses on the identification and management of risks according to two dimensions: (1) risks that result from operating the online service that might be addressed by an identity system and (2) additional risks that are introduced as a result of implementing the identity system.
The first dimension of risk informs initial assurance level selections and seeks to identify risks associated with a compromise of the online service that might be addressed by an identity system. For example:
If there are risks associated with a compromise of the online service that could be addressed by an identity system, an initial assurance level is selected and the second dimension of risk is then considered.
The second dimension of risk seeks to identify the risks posed by the identity system itself and informs the tailoring process. Tailoring provides a process to modify an initially assessed assurance level, implement compensating or supplemental controls, or modify selected controls based on ongoing detailed risk assessments in areas such as privacy, usability, and resilience to real-world threats.
Examples of the types of impact that can result from risks introduced by the identity system itself include:
The outcomes of the DIRM process depend on the role that an entity plays within the digital identity model.
CSPs and IdPs are expected to offer services at assurance levels that are requested by the RPs they serve. However, CSPs and IdPs that choose to deviate from this guideline or augment their services are expected to conduct an abbreviated digital identity risk assessment and document their modifications in a Digital Identity Acceptance Statement that is provided to RPs (see Sec. 3.4.4).
This process augments the risk management processes required by [FISMA]. The results of the DIRM impact assessment for the online service may be different from the FISMA impact level for the underlying application or system. Identity process failures can result in different levels of impact for various user groups. For example, the overall assessed FISMA impact level for a payment system may result in a ‘FISMA Moderate’ impact category because sensitive financial data is being processed by the system. However, for individuals who are making guest payments where no persistent account is established, the authentication and proofing impact levels may be lower. Agency authorizing officials SHOULD require documentation that demonstrates adherence to the DIRM process as a part of the authority to operate (ATO) for the underlying information system that supports an online service. Agency authorizing officials SHOULD require documentation from CSPs that demonstrates adherence to the DIRM process as part of procurement or ATO processes for integration with CSPs.
These guidelines use the term "FISMA impact level"; other NIST RMF publications also use the term "system impact level" to refer to such impact categorization.
There are five steps in the DIRM process:
Fig. 6. High-level diagram of the DIRM Process Flow
Figure 6 illustrates the major actions and outcomes for each step of the DIRM process flow. While presented as a stepwise approach, there can be many points in the process that require divergence from the sequential order, including the need for iterative cycles between initial task execution and revisiting tasks. For example, the introduction of new regulations or requirements while an assessment is in progress may require organizations to revisit a step in the process. Additionally, new functionalities, changes in data usage, and changes to the threat environment may require an organization to revisit steps in the DIRM process at any point, including potentially modifying the assurance levels and/or the related controls of the online service.
Organizations SHOULD adapt and modify this overall approach to meet organizational processes, governance, and enterprise risk management practices. At a minimum, organizations SHALL execute each step and SHALL complete and document the normative mandates and outcomes of each step, regardless of any organization-specific processes or tools used in the overall DIRM process. Additionally, organizations SHOULD consult with a representative sample of the online service’s user population to inform the design and performance evaluation of the identity management system.
The purpose of defining the online service is to understand its functionality and establish a common understanding of its context, which will inform subsequent steps of the DIRM process. The role of the online service is contextualized as part of the broader business environment and associated processes, resulting in a documented description of the scope of the online service, user groups and their expectations, data processed, impacted entities, and other pertinent details.
RPs SHALL develop a description of the online service that includes, at minimum:
It is imperative to consider unexpected and undesirable impacts, as well as the scale of impact, on different entities that result from an unauthorized user gaining access to the online service due to a failure of the digital identity system. For example, if an attacker obtained unauthorized access to an online service that controls a power plant, the actions taken by the attacker could have devastating environmental impacts on the local populations that live near the facility and cause power outages for the localities served by the plant.
It is important to differentiate between user groups and impacted entities, as described in this document. The online service will allow access to a set of users who may be partitioned into a few user groups based on the kind of functionality that is offered to that user group. For example, an online income tax filing and review service may have the following user groups: (1) citizens who need to check the status of their personal tax returns, (2) tax preparers who file tax returns on behalf of their clients, and (3) system administrators who assign privileges to different groups of users or create new user groups as needed. Impacted entities include all those who could face negative consequences in the event of a digital identity system failure. This will likely include members of the user groups but may also include those who never directly use the system.
Accordingly, the scope of impact assessments SHALL include individuals who use the online application as well as the organization itself. Additionally, organizations SHALL identify other entities (e.g., mission partners, communities, and those identified in [SP800-30]) that need to be specifically included based on mission and business needs. At a minimum, organizations SHALL document all impacted entities (both internal and external to the organization) when conducting their impact assessments.
The output of this step is a documented description of the online service, including a list of the user groups and other entities that are impacted by the functionality provided by the online service. This information will serve as a basis and establish the context for effectively applying the impact assessments detailed in the following sections.
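The Step 1 output described above could be captured in a simple structured record. The following sketch uses the income tax filing example from this section; the field names and schema are illustrative assumptions, not a format prescribed by these guidelines:

```python
from dataclasses import dataclass

@dataclass
class OnlineServiceDescription:
    """Illustrative record of the Step 1 output; not a prescribed schema."""
    name: str
    scope: str
    user_groups: dict        # group name -> transactions available to that group
    data_processed: list     # kinds of data handled by the service
    impacted_entities: list  # internal and external entities, per Sec. 3.1

# Hypothetical example drawn from the tax filing service described above.
tax_service = OnlineServiceDescription(
    name="Income tax filing and review",
    scope="Filing, status checks, and administration of tax returns",
    user_groups={
        "citizens": "check the status of personal tax returns",
        "tax preparers": "file tax returns on behalf of clients",
        "system administrators": "assign privileges and create user groups",
    },
    data_processed=["tax returns", "personal contact information"],
    impacted_entities=["citizens", "tax preparers", "the agency",
                       "clients of tax preparers"],
)
print(len(tax_service.user_groups))  # 3 user groups to assess separately
```

Note that `impacted_entities` is broader than the user groups: clients of tax preparers never use the system directly but could still be harmed by a failure.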
This step of the DIRM process addresses the first dimension of risk by identifying the risks to the online service that might be addressed by an identity system.
The purpose of the initial impact assessment is to identify the potential adverse impacts of failures in identity proofing, authentication, and federation that are specific to an online service, yielding an initial set of assurance levels. RPs SHOULD consider historical data and results from user focus groups when performing this step. The impact assessment SHALL include:
The level of impact for each user group identified in Sec. 3.1 SHALL be considered separately based on the transactions available to that user group. This gives organizations maximum flexibility in selecting and implementing assurance levels that are appropriate for each user group. While impacts to user groups, the organization, and other entities are primary considerations for impact assessments, organizations SHOULD also consider scale (e.g., number of persons impacted by transactions).
The output of this assessment is a defined impact level (i.e., Low, Moderate, or High) for each user group. This serves as the primary input to the initial assurance level selection.
While an online service has a discrete set of users and user groups that authenticate to access the functionality provided by the service, there may be a much larger set of entities that are impacted when imposters and attackers obtain unauthorized access to the online service due to errors in identity proofing, authentication, or federation. In Sec. 3.1, such impacted entities are identified and documented as a part of defining the online service.
In this step, organizations identify the categories of impact that are applicable to the impacted entities for a given online service. At a minimum, organizations SHALL include the following impact categories in their impact assessments:
Organizations SHOULD include additional impact categories, as appropriate, based on their mission and business objectives. Each impact category SHALL be documented and consistently applied when implementing the DIRM process across different online services offered by the organization.
Harms refer to any adverse effects that would be experienced by an impacted entity. They provide a means to effectively understand the impact categories and how they may apply to specific entities impacted by the online service. For each impact category, organizations SHALL consider potential harms for each of the impacted entities identified in Sec. 3.1.
Examples of harms associated with each category include:
The outcome of this activity is a list of impact categories and harms that will be used to assess adverse consequences for impacted entities.
In this step, the organization assesses the potential level of impact caused by an unauthorized user gaining access to the online service for each of the impact categories selected by the organization (from Sec. 3.2.1). Impact levels are assigned using one of the following potential impact values:
Each user group can have a distinct set of privileges and functionalities through the online service. Hence, it is necessary to consider the adverse consequences for each set of impacted entities in each of the impact categories as a result of an intruder obtaining unauthorized access as a member of a particular user group. To provide a more objective basis for impact level assignments, organizations SHOULD develop thresholds and examples for the impact levels for each impact category. Where this is done, particularly with specifically defined quantifiable values, these thresholds SHALL be documented and used consistently in the DIRM assessments across an organization to allow for a common understanding of risks.
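As an illustration of such documented quantifiable thresholds, a hypothetical financial-loss impact category might be expressed as follows. The dollar amounts and the category itself are assumptions for this sketch, not values prescribed by these guidelines; each organization defines and documents its own thresholds:

```python
def financial_impact_level(estimated_loss_usd: float) -> str:
    """Map an estimated per-incident financial loss to an impact level.

    The thresholds below are hypothetical examples of the documented,
    consistently applied values this section calls for.
    """
    if estimated_loss_usd == 0:
        return "None"       # no harm in this category
    if estimated_loss_usd < 10_000:
        return "Low"
    if estimated_loss_usd < 1_000_000:
        return "Moderate"
    return "High"

print(financial_impact_level(50_000))  # Moderate
```

Documenting thresholds like these lets different assessment teams across an organization arrive at the same impact level for comparable harms.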
Examples of potential impacts in each of the impact categories include:
This guideline provides three impact levels. However, organizations MAY define more granular impact levels and develop their own methodologies for their initial impact assessment activities.
The impact analysis considers the level of impact (i.e., Low, Moderate, or High) of compromises of any of the identity system functions (i.e., identity proofing, authentication, and federation) that result in an intruder obtaining unauthorized access to the online service as a member of a particular user group and initiating transactions that cause negative effects on impacted entities. The impact analysis considers the following dimensions:
The impact analysis SHALL consider the level of impact for each impact category for each type of impacted entity if an intruder obtains unauthorized access as a member of each user group. Because different sets of transactions are available to each user group, it is important to consider each user group separately for this analysis.
For example, for an online service that allows for the control, operation, and monitoring of a water treatment facility, each group of users (e.g., technicians who control and operate the facility, auditors and monitoring officials, system administrators) is considered separately based on the transactions available to that user group through the online service. The impact analysis assesses the level of impact (i.e., Low, Moderate, or High) on various impacted entities (e.g., citizens who drink the water, the organization that owns the facility, auditors, monitoring officials) for each of the impact categories being considered if a bad actor obtains unauthorized access to the online service as a member of a user group.
The impact analysis SHALL be performed for each user group that has access to the online service. For each impact category, the impact level is estimated for each impacted entity as a result of a compromise of the online service caused by failures in the identity management functions.
If there is no harm or impact for a given impact category for any entity, the impact level can be marked as None.
The output of this impact analysis is a set of impact levels for each user group that SHALL be documented in a suitable format for further analysis in accordance with Sec. 3.4.
The impact assessment levels for each user group are combined to establish a single impact level to represent the risks to impacted entities from a compromise of identity proofing, authentication, and/or federation functions for that user group.
Organizations can apply a variety of methods for this combinatorial analysis, such as:
Organizations SHALL document the approach they use to combine their impact assessment into an overall impact level for each of their defined user groups and SHALL apply it consistently across all of their online services. At the conclusion of the combinatorial analysis, organizations SHALL document the impact for each user group.
The outcome of this step is an effective impact level for each user group due to a compromise of the identity management system functions (i.e., identity proofing, authentication, federation).
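One common combinatorial method is a "high water mark" approach, in which the effective impact level for a user group is the highest level assessed in any impact category for any impacted entity. A minimal sketch using the hypothetical water treatment facility example from the impact analysis above (the entities, categories, and assessed levels are illustrative assumptions):

```python
# Ordered impact levels; a higher index means greater impact.
LEVELS = ["None", "Low", "Moderate", "High"]

# Hypothetical impact analysis for one user group (facility technicians):
# impact category -> impacted entity -> assessed level.
technician_assessment = {
    "harm to individuals": {"citizens": "High", "facility owner": "Low"},
    "financial loss":      {"citizens": "Low",  "facility owner": "Moderate"},
    "mission impact":      {"citizens": "None", "facility owner": "Moderate"},
}

def effective_impact_level(assessment: dict) -> str:
    """Combine per-category, per-entity levels using the high water mark."""
    all_levels = (level for entities in assessment.values()
                  for level in entities.values())
    return max(all_levels, key=LEVELS.index)

print(effective_impact_level(technician_assessment))  # High
```

Organizations may adopt other documented combination methods; whatever method is chosen, this section requires that it be applied consistently across the organization's online services.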
The effective impact level (i.e., Low, Moderate, or High) serves as a primary input to the process of selecting the initial assurance level for each user group (see Sec. 3.3.1) to identify the corresponding set of baseline digital identity controls from the requirements and guidelines in the companion volumes [SP800-63A], [SP800-63B], and [SP800-63C]. The resulting initial assurance level for each user group applies to all three digital identity system functions (i.e., identity proofing, authentication, and federation).
The initial set of selected digital identity controls and processes will be assessed and tailored in Step 4 based on potential risks generated by the identity management system.
Depending on the functionality and deployed architecture of the online service, the support of one or more of the identity management functions (i.e., identity proofing, authentication, and federation) may be required. The strength of these functions is described in terms of assurance levels. The RP SHALL identify the types of assurance levels that apply to their online service from the following:
A summary of each of the xALs is provided below. While high-level descriptions of the assurance levels are provided in this subsection, readers of this guideline are encouraged to refer to companion volumes [SP800-63A], [SP800-63B], and [SP800-63C] for normative guidelines and requirements for each assurance level.
IAL1: IAL1 supports the real-world existence of the claimed identity and provides some assurance that the applicant is associated with that identity. Core attributes are obtained from identity evidence or self-asserted by the applicant. All core attributes are validated against authoritative or credible sources, and steps are taken to link the attributes to the person undergoing the identity proofing process.
IAL2: IAL2 requires collecting additional evidence and a more rigorous process for validating the evidence and verifying the identity.
IAL3: IAL3 adds the requirement for a trained CSP representative (i.e., proofing agent) to interact directly with the applicant, as part of an on-site attended identity proofing session, and the collection of at least one biometric.
Table 1 describes the control objectives (i.e., attack protections) for each identity assurance level.
IAL | Control Objectives | User Profile |
---|---|---|
IAL1 | Limit highly scalable attacks. Protect against synthetic identity. Protect against attacks that use compromised personal information. | Access to personal information is required but limited. User actions are limited (e.g., viewing and making modifications to individual personal information). Fraud cannot be directly perpetrated through available user functions. Users cannot receive payments until an offline or manual process is conducted. |
IAL2 | Limit scaled and targeted attacks. Protect against basic evidence falsification and theft. Protect against basic social engineering. | Users can view and change financial information (e.g., a direct deposit location). Individuals can directly perpetrate financial fraud through the available application functionality. A user can view or modify other users’ personal information. Users have visibility into or access to proprietary information. |
IAL3 | Limit sophisticated attacks. Protect against advanced evidence falsification, theft, and repudiation. Protect against advanced social engineering attacks. | Users have direct access to multiple highly sensitive records; administrator access to servers, systems, or security data; the ability to access large sets of data that may reveal sensitive information about one or many users; or access that could result in a breach that would constitute a major incident under OMB guidance. |
AAL1: AAL1 provides basic confidence that the claimant controls an authenticator that is bound to the subscriber account. AAL1 requires either single-factor or multi-factor authentication using a wide range of available authentication technologies. Verifiers are expected to make multi-factor authentication options available at AAL1 and encourage their use. Successful authentication requires the claimant to prove possession and control of the authenticator through a secure authentication protocol.
AAL2: AAL2 provides high confidence that the claimant controls one or more authenticators that are bound to the subscriber account. Proof of the possession and control of two distinct authentication factors is required through the use of secure authentication protocols. Approved cryptographic techniques are required.
AAL3: AAL3 provides very high confidence that the claimant controls authenticators that are bound to the subscriber account. Authentication at AAL3 is based on the proof of possession of a key through the use of a cryptographic protocol and either an activation factor or a password. AAL3 authentication requires the use of a public-key cryptographic authenticator with a non-exportable private key that provides phishing resistance. Approved cryptographic techniques are required.
Table 2 describes the control objectives (i.e., attack protections) for each authentication assurance level.
AAL | Control Objectives | User Profile |
---|---|---|
AAL1 | Provide minimal protections against attacks. Deter password-focused attacks. | No personal information is available to any users, but some profile or preference data may be retained to support usability and the customization of applications. |
AAL2 | Require multi-factor authentication. Offer phishing-resistant options. | Individual personal information can be viewed or modified by users. Limited proprietary information can be viewed by users. |
AAL3 | Require phishing resistance and verifier compromise protections. | Highly sensitive information can be viewed or modified. Multiple proprietary records can be viewed or modified by users. Privileged user access could result in a breach that would constitute a major incident under OMB guidance. |
FAL1: FAL1 provides a basic level of protection for federation transactions and supports a wide range of use cases and deployment decisions.
FAL2: FAL2 provides a high level of protection for federation transactions and additional protection against a variety of attacks against federated systems, including attempts to inject assertions into a federated transaction.
FAL3: FAL3 provides a very high level of protection for federation transactions and establishes very high confidence that the information communicated in the federation transaction matches what was established by the CSP and IdP.
Table 3 describes the control objectives (i.e., attack protections) for each federation assurance level.
FAL | Control Objectives | User Profile |
---|---|---|
FAL1 | Protect against forged assertions. | No sensitive personal information is available to any users, but some profile or preference data may be retained to support usability or the customization of applications. |
FAL2 | Protect against forged assertions and injection attacks. | Users can access personal information and other sensitive data with appropriate authentication assurance levels (e.g., AAL2 or above). |
FAL3 | Protect against IdP compromise. | Federation primarily supports attribute exchange. Users have access to classified or highly sensitive information or services that could result in a breach that would constitute a major incident under OMB guidance. |
Organizations SHALL develop and document a process and governance model for selecting initial assurance levels and controls based on the potential impacts of failures in the digital identity system. The following subsections provide guidance on the major elements to consider in the process for selecting initial assurance levels.
The overall impact level for each user group is used as the basis for selecting the initial assurance level and related technical and process controls for the online service under assessment based on the impacts of failures within the digital identity functions. The initial assurance levels and controls can be further assessed and tailored in the next step of the DIRM process.
Although the initial impact assessment (see Sec. 3.2) and the combined impact level determination for each user group (see Sec. 3.2.4) do not differentiate between identity proofing, authentication, and federation risks, the selected initial xALs may still be different. For example, the initial impact assessment may be low impact and indicate IAL1 and FAL1 but may also determine that personal information is accessible, which requires AAL2. Similarly, the impact assessment may determine that no proofing is required, resulting in no IAL regardless of the baselines for authentication and federation. Further changes may result from the tailoring process as discussed in Step 4: Tailoring.
The output of this step is a set of initial xALs that are applicable to the online service for each user group.
Before selecting an initial assurance level, RPs must determine whether identity proofing is needed for the users of their online services. Identity proofing is not required if the online service does not need any personal information to execute digital transactions. If personal information is needed, the RP must determine whether validated attributes are required or if self-asserted attributes are acceptable. The system may also be able to operate without identity proofing if the potential harms from accepting self-asserted attributes are insignificant. In such cases, the identity proofing processes described in [SP800-63A] are not applicable to the system.
If the online service does require identity proofing, an initial IAL is selected through a simple mapping process:
The organization SHALL document whether identity proofing is required for their application and, if it is, SHALL select an initial IAL for each user group based on the effective impact level determination from Sec. 3.2.4.
The IAL reflects the level of assurance that an applicant holds the claimed real-life identity. The initial selection assumes that higher potential impacts of failures in the identity proofing process should be mitigated by higher assurance processes.
Authentication is needed for online services that offer access to personal information, protected information, or subscriber accounts. Organizations should consider the legal, regulatory, or policy requirements that govern online services when making decisions regarding the application of authentication assurance levels and authentication mechanisms. For example, [EO13681] states that “all organizations making personal data accessible to citizens through digital applications require the use of multiple factors of authentication,” which requires a minimum selection of AAL2 for applications that meet those criteria.
If the online service requires authentication to be implemented, an initial AAL is selected through a simple mapping process:
The organization SHALL document whether authentication is needed for their online service and, if it is, SHALL select an initial AAL for each user group based on the effective impact level determination from Sec. 3.2.4.
The AAL reflects the level of assurance that the claimant is the same individual to whom the authenticator was registered. The initial selection assumes that higher potential impacts of failures in the authentication process should be mitigated by higher assurance processes.
Identity federation brings many benefits, including a convenient customer experience that avoids redundant, costly, and often time-consuming identity processes. The benefits of federation through a general-purpose IdP model or a subscriber-controlled wallet model are covered in Sec. 5 of [SP800-63C]. However, not all online services will be able to make use of federation, whether for risk-based reasons or due to legal or regulatory requirements. Consistent with [M-19-17], federal agencies that operate online services SHOULD implement federation as an option for user access.
If the online service implements identity federation, an initial FAL is selected through a simple mapping process:
The organization SHALL document whether federation will be used for their online service and, if it is, SHALL select an initial FAL for each user group based on the effective impact level determination from Sec. 3.2.4.
For online services that are assessed to be high impact, organizations SHALL conduct a further assessment to evaluate the risk of a compromised IdP to determine whether FAL2 or FAL3 is more appropriate for their use case. Considerations SHOULD include the type of data being accessed, the location of the IdP (e.g., whether the IdP is internal or external to their enterprise boundary), and the availability of bound authenticators or holder-of-key capabilities.
The FAL reflects the level of assurance in identity assertions that convey the results of authentication processes and relevant identity information to RP online services. The preliminary selection assumes that higher potential impacts of failures in federated identity architectures should be mitigated by higher assurance processes.
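Assuming a straightforward one-to-one reading of the mapping processes above (Low → xAL1, Moderate → xAL2, High → xAL3) — an assumption for this sketch, since the normative mapping tables are defined elsewhere in these guidelines — the initial selection for a user group might look like the following. The High-impact federation case is flagged for the further FAL2-versus-FAL3 assessment required above:

```python
# Assumed one-to-one mapping from effective impact level to assurance level.
IMPACT_TO_LEVEL = {"Low": 1, "Moderate": 2, "High": 3}

def initial_xals(effective_impact: str,
                 needs_proofing: bool,
                 needs_federation: bool) -> dict:
    """Sketch of initial assurance level selection for one user group.

    Authentication is assumed to be required in this sketch; identity
    proofing and federation are only selected when the service needs them.
    """
    n = IMPACT_TO_LEVEL[effective_impact]
    xals = {"AAL": f"AAL{n}"}
    if needs_proofing:
        xals["IAL"] = f"IAL{n}"
    if needs_federation:
        # High impact requires a further assessment to choose FAL2 vs FAL3.
        xals["FAL"] = ("FAL2 or FAL3 (further assessment)"
                       if n == 3 else f"FAL{n}")
    return xals

print(initial_xals("Moderate", needs_proofing=True, needs_federation=True))
# {'AAL': 'AAL2', 'IAL': 'IAL2', 'FAL': 'FAL2'}
```

Because the selection is per user group, a service with several user groups would run this selection once per group against that group's effective impact level.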
The selection of the initial assurance levels for each user group and each of the applicable identity functions (i.e., IAL, AAL, and FAL) serves as the basis for selecting the baseline digital identity controls from the guidelines in companion volumes [SP800-63A], [SP800-63B], and [SP800-63C]. As described in Sec. 3.4, the baseline controls include technical and process controls that will be assessed against additional potential impacts.
Using the initial xALs selected in Sec. 3.3.3, the organization SHALL identify the applicable baseline controls for each user group as follows:
While online service providers must assess and determine the xALs that are appropriate for protecting their applications, the selection of these assurance levels does not mean that the online service provider must implement the related technical and process controls independently. Based on the identity model that the online service provider implements, some or all of the assurance levels and related controls may be implemented by an external entity, such as a third-party CSP or IdP.
The output of this step is a set of assigned xALs and baseline controls for each user group.
The second dimension of risk addressed by the DIRM process focuses on risks from the identity management system that represent the unintended negative consequences of the initial selection of xALs and related technical and process controls in Sec. 3.3.4.
Tailoring provides a process to modify an initially assessed assurance level and implement compensating or supplemental controls based on ongoing detailed risk assessments. It provides a pathway for flexibility and enables organizations to achieve risk management objectives that align with their specific context, users, and threat environments. This process focuses on assessing the risks posed by the identity system itself, specific environmental threats, and privacy and customer experience impacts. It does not prioritize any specific risk area or outcomes for organizations. Making decisions that balance different types of risks to meet organizational outcomes remains the responsibility of organizations.
While organizations are required to implement and document a tailoring process, this guideline does not require the initial assurance levels or control sets to be modified as a result. However, organizations are expected to complete the assessments in the tailoring section to fully account for the outcomes of their selected initial assurance levels.
Within the tailoring step, organizations SHALL focus on impacts to mission delivery due to the implementation of identity management controls, including the possibility that legitimate users are unable to access desired online services or experience enough friction or frustration with the identity system (and technology selection) that they abandon attempts to access the online service.
As a part of the tailoring process, organizations SHALL review the Digital Identity Acceptance Statements and practice statements from CSPs and IdPs that they use or intend to use. However, organizations SHALL also conduct their own analysis to ensure that the organization’s specific mission and the communities being served by the online service are given due consideration for tailoring purposes. As a result, the organization MAY require their chosen CSP to strengthen or provide optionality in the implementation of certain controls to address risks and unintended impacts to the organization’s mission and the communities served.
To promote interoperability and consistency, organizations SHOULD implement their assessed or tailored xALs consistent with the normative guidance in this document. However, these guidelines provide flexibility to allow organizations to tailor the initial xALs and related controls to meet specific mission needs, address unique risk appetites, and provide secure and accessible online services. In doing so, CSPs MAY offer and organizations MAY utilize tailored sets of controls that differ from the normative statements in this guideline.
Organizations SHALL establish and document their xAL tailoring process. At a minimum, this process:
The tailoring process promotes a structured means of balancing risks and impacts while protecting online services, systems, and data in a manner that enables mission success and supports security, customer experience, and privacy for individuals.
When selecting and tailoring assurance levels for specific online services, considerations extend beyond the initial impact assessment in Sec. 3.2. When progressing from the initial assurance level selection in Sec. 3.3.4 to the final xAL selection and implementation, organizations SHALL conduct detailed assessments of the controls defined for the initially selected xALs to identify potential impacts in the operational environment.
At a minimum, organizations SHALL assess the impacts and potential unintended consequences related to the following areas:
Organizations SHOULD leverage consultation and feedback from the entities and communities served to ensure that the tailoring process addresses their known constraints.
Organizations SHOULD also conduct additional business-specific assessments as appropriate to fully represent mission- and domain-specific considerations that have not been captured here. All assessments applied during the tailoring phase SHALL be extended to any compensating or supplemental controls, as defined in Sec. 3.4.2 and Sec. 3.4.3. While identity system costs are not specifically included as an input for DIRM processes or as a metric for continuous evaluation, the costs and cost effectiveness of implementation and long-term operation are inherent considerations for responsible program and risk management. Based on their available funding and resources, organizations will likely need to make trade-offs that can be more effectively informed by the DIRM process and its outputs. Any cost-based decisions that result in modifications to assessed xALs or baseline controls SHALL be documented in the Digital Identity Acceptance Statement (see Sec. 3.4.4).
The outcome of this step is a set of risk assessments for privacy, customer experience, threat resistance, and other dimensions that inform the tailoring of the initial assurance levels and the selection of compensating and supplemental controls.
A compensating control is a management, operational, or technical control employed by an organization in lieu of a normative control (i.e., SHALL statements) in the defined xALs. To the greatest degree practicable, a compensating control is intended to address the same risks as the baseline control it is replacing. Organizations MAY choose to implement a compensating control if they are unable to implement a baseline control or when a risk assessment indicates that a compensating control sufficiently mitigates risk in alignment with organizational risk tolerance. This control MAY be a modification to the normative statements defined in these guidelines or MAY be applied elsewhere in an online service, digital transaction, or service life cycle. For example:
Where compensating controls are implemented, organizations SHALL document the compensating control, the rationale for the deviation, comparability of the chosen alternative, and any resulting residual risks. CSPs and IdPs that implement compensating controls SHALL communicate this information to all potential RPs prior to integration to allow the RP to assess and determine the acceptability of the compensating controls for their use cases.
The process of tailoring allows organizations and service providers to make risk-based decisions regarding how they implement their xALs and related controls. It also provides a mechanism for documenting and communicating decisions through the Digital Identity Acceptance Statement described in Sec. 3.4.4.
Supplemental controls may be added to further strengthen the baseline controls specified for the organization’s selected assurance levels. Organizations SHOULD identify and implement supplemental controls to address specific threats in the operational environment that may not be addressed by the baseline controls. For example:
Any supplemental controls SHALL be assessed for impacts based on the same factors used to tailor the organization’s assurance level and SHALL be documented.
Organizations SHALL develop a Digital Identity Acceptance Statement (DIAS) to document the results of the DIRM process for (i) each online service managed by the organization, and (ii) each external online service used to support the mission of the organization, including software-as-a-service offerings (e.g., social media platforms, email services, online marketing services). RPs who intend to use a particular CSP/IdP SHALL review the latter’s DIAS and incorporate relevant information into the organization’s DIAS for each online service.
Organizations SHALL prepare a DIAS for their online service that includes, at a minimum:
Federal agencies SHOULD include this information in the information system authorization package described in [NISTRMF].
CSPs/IdPs SHALL implement the DIRM process and develop a DIAS for the services they offer if they deviate from the normative guidance in these guidelines, including when supplemental or compensating controls are added. To complete a DIRM of their offered assurance levels and controls, CSPs/IdPs MAY base their assessment on anticipated or representative digital identity services that they wish to support. In creating this risk assessment, they SHOULD seek input from real-world RPs on their user populations and anticipated context. The DIAS prepared by a CSP SHALL include, at a minimum:
The DIRM process for external online services used by the organization SHALL consider relevant inputs from the provider of the service and document the results in a DIAS. The DIAS prepared by the organization for external online services SHALL include, at a minimum:
The final implemented xALs do not all need to be at the same level. There may be variance based on the functions of the online service, the impact assessment, and the tailoring process.
Continuous improvement is a critical tool for keeping pace with the threat and technology environment and identifying programmatic gaps that need to be addressed to balance risk management objectives. For instance, an organization may determine that a portion of the target population intended to be served by the online service does not have access to affordable high-speed internet services, which are needed to support remote identity proofing. The organization could bridge this gap by establishing a program that offers local, in-person proofing services within the community. This could involve providing appointments with proofing agents who can meet individuals at more accessible locations, such as their local community center, the nearest post office, a partner business facility, or even at the individual’s home.
To address the shifting environment in which they operate and more rapidly address service capability gaps, organizations SHALL implement a continuous evaluation and improvement program that leverages input from end users who have interacted with the identity management system as well as performance metrics for the online service. This program SHALL be documented, including the metrics that are collected, the sources of data required to enable performance evaluation, and the processes in place for taking timely actions based on the continuous improvement process. This program and its effectiveness SHOULD be assessed on a regular basis to ensure that outcomes are being achieved and that programs are addressing issues in a timely manner.
Additionally, organizations SHALL monitor the evolving threat landscape to stay informed of the latest threats and fraud tactics. Organizations SHALL regularly assess the effectiveness of current security measures and fraud detection capabilities against the latest threats and fraud tactics.
To fully understand the performance of their identity system, organizations will need to identify critical inputs to their continuous evaluation process. At a minimum, these inputs SHALL include:
RPs SHALL document their metrics, reporting requirements, and data inputs for any CSP, IdP, or other integrated identity service to ensure that expectations are appropriately communicated to partners and vendors.
The exact metrics available to organizations will vary based on the technologies, architectures, and deployment methods they use. Additionally, the availability and usefulness of certain metrics will vary over time. Therefore, these guidelines do not attempt to define a comprehensive set of metrics for all scenarios. Table 4 provides a set of recommended metrics that organizations SHOULD track as part of their continuous evaluation program. However, organizations are not constrained by this table and SHOULD implement metrics based on their specific systems, technology, and program needs. See [SP800-55V2] for more information on identifying additional performance metrics. In Table 4, all references to unique users include both legitimate users and imposters.
Title | Description | Type |
---|---|---|
Pass Rate (Overall) | Percentage of unique users who successfully complete identity proofing | Proofing |
Pass Rate (Per Proofing Type) | Percentage of unique users who are successfully proofed for each offered type (i.e., remote unattended, remote attended, on-site attended, on-site unattended) | Proofing |
Fail Rate (Overall) | Percentage of unique users who start the identity proofing process but are unable to successfully complete all of the steps | Proofing |
Estimated Adjusted Fail Rate | Percentage of failures adjusted to account for identity proofing attempts that are suspected to be fraudulent | Proofing |
Fail Rate (Per Proofing Type) | Percentage of unique users who do not complete proofing due to a process failure for each offered type (i.e., remote unattended, remote attended, on-site attended, on-site unattended) | Proofing |
Abandonment Rate (Overall) | Percentage of unique users who start the identity proofing process and exit without either completing it or failing a process step | Proofing |
Abandonment Rate (Per Proofing Type) | Percentage of unique users who start a specific type of identity proofing process and exit without either completing it or failing a process step | Proofing |
Failure Rates (Per Proofing Process Step) | Percentage of unique users who are unsuccessful at completing each identity proofing step in a CSP process | Proofing |
Completion Times (Per Proofing Type) | Average time that it takes a user to complete each defined proofing type offered as part of an identity service | Proofing |
Authenticator Type Usage | Percentage of subscribers with an active authenticator of each available type | Authentication |
Authentication Failures | Percentage of authentication events that fail (not to include authentication attempts that are successful after re-entry of an authenticator output) | Authentication |
Account Recovery Attempts | The number of account or authenticator recovery processes initiated by subscribers | Authentication |
Confirmed Unauthorized Access or Fraud | Percentage of total transaction events (i.e., identity proofing + authentication events) that the organization determines to be unauthorized or fraudulent through analysis or self-reporting | Fraud |
Suspected Unauthorized Access or Fraud | Percentage of total transaction events (i.e., identity proofing + authentication events) that are suspected to be unauthorized or fraudulent | Fraud |
Reported Unauthorized Access or Fraud | Percentage of total transaction events (i.e., identity proofing + authentication events) reported to be unauthorized or fraudulent by users | Fraud |
Unauthorized Access or Fraud (Per Proofing Type) | Number of identity proofing events that are suspected or reported as being fraudulent for each available type of proofing | Fraud |
Unauthorized Access or Fraud (Per Authentication Type) | Number of authentication events that are suspected or reported to be unauthorized or fraudulent by each available type of authentication | Fraud |
Help Desk Calls | Number of calls received by the CSP or identity service | Customer Experience |
Help Desk Calls (Per Type) | Number of calls received related to each offered service (e.g., proofing failures, authenticator resets, complaints) | Customer Experience |
Help Desk Resolution Times | Average length of time that it takes to resolve a complaint or help desk ticket | Customer Experience |
Customer Satisfaction Surveys | The results of customer feedback surveys conducted by CSPs, RPs, or both | Customer Experience |
Redress Requests | The number of redress requests received related to the identity management system | Customer Experience |
Redress Resolution Times | The average time it takes to resolve redress requests related to the identity management system | Customer Experience |
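The rate metrics in Table 4 can be derived from identity service event logs. As an illustrative sketch only (the event schema, field names, and outcome labels here are hypothetical and are not defined by these guidelines), the overall pass, fail, and abandonment rates per unique user could be computed as follows:

```python
from collections import defaultdict

# Hypothetical event records: (user_id, outcome), where outcome is one of
# "start", "pass", or "fail". A user with a "start" but no terminal
# outcome is counted as an abandonment.
def proofing_rates(events):
    """Compute overall pass, fail, and abandonment rates per unique user."""
    outcomes = defaultdict(set)
    for user_id, outcome in events:
        outcomes[user_id].add(outcome)
    total = len(outcomes)
    if total == 0:
        return {"pass": 0.0, "fail": 0.0, "abandon": 0.0}
    passed = sum(1 for o in outcomes.values() if "pass" in o)
    failed = sum(1 for o in outcomes.values() if "fail" in o and "pass" not in o)
    abandoned = total - passed - failed
    return {
        "pass": passed / total,
        "fail": failed / total,
        "abandon": abandoned / total,
    }

events = [
    ("u1", "start"), ("u1", "pass"),
    ("u2", "start"), ("u2", "fail"),
    ("u3", "start"),                  # started but never finished
    ("u4", "start"), ("u4", "pass"),
]
print(proofing_rates(events))  # pass 0.5, fail 0.25, abandon 0.25
```

The same aggregation can be partitioned by proofing type or process step to produce the per-type and per-step metrics in the table.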
The data used to generate continuous evaluation metrics may not always reside with the identity program or the organizational entity responsible for identity management systems. The intent of these metrics is to integrate with existing data sources whenever possible to collect information that is critical to identity program evaluation. For example, customer service representative (CSR) teams may already have substantial information on customer requests, complaints, or concerns. Organizations that implement and maintain identity management systems are expected to coordinate with these teams to acquire the information needed to discern identity management system-related complaints or issues.
A primary goal of continuous improvement is to enhance customer experience, usability, and accessibility outcomes for different user populations. As a result, the metrics collected by organizations SHOULD be further evaluated to provide insights into the performance of their identity management systems for their supported communities. Where possible, these efforts SHOULD avoid the collection of additional personal information and instead use informed analyses of proxy data to identify potential performance issues. This can include comparing and filtering the metrics to understand deviations in performance across different user populations based on other available data, such as zip code, geographic region, age, or sex.
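Such population-level comparisons can often be made without collecting new personal information. As a minimal sketch under assumed inputs (the record layout and group labels are hypothetical), a metric can be filtered by an available proxy field such as geographic region to surface performance gaps:

```python
from collections import defaultdict

# Hypothetical records: (group, passed) pairs from proofing attempts,
# where "group" is an existing proxy field such as region or zip code.
def pass_rate_by_group(records):
    """Return the proofing pass rate for each population group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [passed, total]
    for group, passed in records:
        counts[group][0] += int(passed)
        counts[group][1] += 1
    return {g: p / t for g, (p, t) in counts.items()}

records = [
    ("region-a", True), ("region-a", True), ("region-a", False),
    ("region-b", True), ("region-b", False), ("region-b", False),
]
rates = pass_rate_by_group(records)
# A large gap between groups flags a potential performance issue
# warranting further investigation, not a conclusion by itself.
print(rates)
```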
Designing services that support a wide range of populations requires processes to adjudicate issues and provide redress as warranted. Service failures, disputes, and other issues tend to arise as part of normal operations, and their impacts can vary broadly, from minor inconveniences to major disruptions or damage. Furthermore, the same issue experienced by one person or community as an inconvenience can have disproportionately damaging impacts on other individuals and communities.
To enable access to critical online services while deterring identity-related fraud and cybersecurity threats, it is essential for organizations to plan for potential issues and to design redress approaches that aim to be fair, transparent, easy for legitimate claimants to navigate, and resistant to exploitation attempts.
Understanding when and how harms might be occurring is a critical first step for organizations to take informed action. Continuous evaluation and improvement programs can play a key role in identifying instances and patterns of potential harm. Moreover, there may be business processes in place outside of those established to support identity management that can be leveraged as part of a comprehensive approach to issue adjudication and redress. Beyond these activities, additional practices can be implemented to ensure that users of identity management systems are able to voice their concerns and have a path to redress. Requirements for these practices include:
Organizations are encouraged to consider these and other emerging redress practices. Prior to adopting any new redress practice, including supporting technology, organizations SHOULD test the practice with users to avoid the introduction of unintended consequences, particularly those that may counteract or contradict the goals associated with redress. In addition, organizations SHALL assess the integrity and performance of their redress mechanisms and implement controls to prevent, detect, and remediate attempted identity fraud involving those mechanisms.
The close coordination of identity functions with teams that are responsible for cybersecurity, privacy, threat intelligence, fraud detection, and program integrity enables more complete protection of business capabilities and continuous improvement. For example, payment fraud data collected by program integrity teams could provide indicators of compromised subscriber accounts and potential weaknesses in identity proofing implementations. Similarly, threat intelligence teams may learn of new TTPs that could impact identity proofing, authentication, and federation processes. Organizations SHALL establish consistent mechanisms for the exchange of information between stakeholders that are responsible for critical internal security and fraud prevention. Organizations SHOULD do the same for external stakeholders and identity services that comprise their online services.
When organizations are supported by external identity providers (e.g., CSPs), the exchange of data related to security, fraud, and other RP functions may be complicated by regulations or policy. However, establishing the necessary mechanisms and guidelines to enable effective information sharing SHOULD be considered in contractual and legal mechanisms. All data collected, transmitted, or shared by the identity service provider SHALL be subject to a detailed privacy and legal assessment by either the entity generating the data (e.g., a CSP) or the related RP for whom the service is provided.
Coordination and integration with various organizational functional teams can help to achieve better outcomes for the identity functions. Ideally, such coordination is performed throughout the risk management process and operations life cycle. Companion volumes [SP800-63A], [SP800-63B], and [SP800-63C] provide specific fraud mitigation requirements related to each of the identity functions.
Identity solutions use artificial intelligence (AI) and machine learning (ML) in various ways, such as improving the performance of biometric matching systems, automating evidence or attribute validation, detecting fraud, and even assisting users (e.g., chatbots). While the potential applications of AI and ML are extensive, these technologies may also introduce new risks or produce unintended negative outcomes.
The following requirements apply to all uses of AI and ML in the identity system, regardless of how they are used:
Further information on practice statements and their contents can be found in Sec. 3.1 of SP 800-63A.
For more information about privacy risk assessments, refer to the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management at https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.01162020.pdf.
Redress generally refers to a remedy that is made after harm occurs.
This section is informative.
[EO13681] Obama B (2014) Improving the Security of Consumer Financial Transactions. (The White House, Washington, DC), Executive Order 13681, October 17, 2014. Available at https://www.federalregister.gov/d/2014-25439
[FIPS199] National Institute of Standards and Technology (2004) Standards for Security Categorization of Federal Information and Information Systems. (U.S. Department of Commerce, Washington, DC), Federal Information Processing Standards Publication (FIPS) 199. https://doi.org/10.6028/NIST.FIPS.199
[U.S.C3552] Definitions, 44 U.S.C. § 3552. Available at https://www.govinfo.gov/app/details/USCODE-2014-title44/USCODE-2014-title44-chap35-subchapII-sec3552
[FIPS201] National Institute of Standards and Technology (2022) Personal Identity Verification (PIV) of Federal Employees and Contractors. (U.S. Department of Commerce, Washington, DC), Federal Information Processing Standards Publication (FIPS) 201-3. https://doi.org/10.6028/NIST.FIPS.201-3
[FISMA] Federal Information Security Modernization Act of 2014, Pub. L. 113-283, 128 Stat. 3073. Available at https://www.govinfo.gov/app/details/PLAW-113publ283
[ISO/IEC9241-11] International Standards Organization (2018) ISO/IEC 9241-11 Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/63500.html
[M-03-22] Office of Management and Budget (2003) OMB Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002. (The White House, Washington, DC), OMB Memorandum M-03-22, September 26, 2003. Available at https://georgewbush-whitehouse.archives.gov/omb/memoranda/m03-22.html
[M-19-17] Office of Management and Budget (2019) Enabling Mission Delivery through Improved Identity, Credential, and Access Management. (The White House, Washington, DC), OMB Memorandum M-19-17, May 21, 2019. Available at https://www.whitehouse.gov/wp-content/uploads/2019/05/M-19-17.pdf
[NISTAIRMF] Tabassi E (2023) Artificial Intelligence Risk Management Framework (AI RMF 1.0). (National Institute of Standards and Technology, Gaithersburg, MD), NIST AI 100-1. https://doi.org/10.6028/NIST.AI.100-1
[NISTIR8062] Brooks SW, Garcia ME, Lefkovitz NB, Lightman S, Nadeau EM (2017) An Introduction to Privacy Engineering and Risk Management in Federal Systems. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR) NIST IR 8062. https://doi.org/10.6028/NIST.IR.8062
[NISTRMF] Joint Task Force (2018) Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-37r2. https://doi.org/10.6028/NIST.SP.800-37r2
[NISTPF] National Institute of Standards and Technology (2020) NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Cybersecurity White Paper (CSWP) NIST CSWP 10. https://doi.org/10.6028/NIST.CSWP.10
[PrivacyAct] Privacy Act of 1974, Pub. L. 93-579, 5 U.S.C. § 552a, 88 Stat. 1896 (1974). Available at https://www.govinfo.gov/content/pkg/USCODE-2018-title5/pdf/USCODE-2018-title5-partI-chap5-subchapII-sec552a.pdf
[RFC5280] Cooper D, Santesson S, Farrell S, Boeyen S, Housley R, Polk W (2008) Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5280. https://doi.org/10.17487/RFC5280
[RFC8446] Rescorla E (2018) The Transport Layer Security (TLS) Protocol Version 1.3. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 8446. https://doi.org/10.17487/RFC8446
[RFC9325] Sheffer Y, Saint-Andre P, Fossati T (2022) Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS). (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 9325. https://doi.org/10.17487/RFC9325
[SP800-30] Blank R, Gallagher P (2012) Guide for Conducting Risk Assessments. (National Institute of Standards and Technology, Gaithersburg, MD) NIST Special Publication (SP) NIST SP 800-30r1. https://doi.org/10.6028/NIST.SP.800-30r1
[SP800-52] McKay K, Cooper D (2019) Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations. (National Institute of Standards and Technology), NIST Special Publication (SP) NIST SP 800-52r2. https://doi.org/10.6028/NIST.SP.800-52r2
[SP800-53] Joint Task Force (2020) Security and Privacy Controls for Information Systems and Organizations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-53r5, Includes updates as of December 10, 2020. https://doi.org/10.6028/NIST.SP.800-53r5
[SP800-55V2] Schroeder K, Trinh H, Pillitteri V (2024) Measurement Guide for Information Security: Volume 2 — Developing an Information Security Measurement Program. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-55 Vol. 2. https://doi.org/10.6028/NIST.SP.800-55v2
[SP800-57Part1] Barker EB (2020) Recommendation for Key Management: Part 1 – General. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-57pt1r5. https://doi.org/10.6028/NIST.SP.800-57pt1r5
[SP800-63A] Temoshok D, Abruzzi C, Choong YY, Fenton JL, Galluzzo R, LaSalle C, Lefkovitz N, Regenscheid A, Vachino M (2025) Digital Identity Guidelines: Identity Proofing and Enrollment. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-63A-4. https://doi.org/10.6028/NIST.SP.800-63A-4
[SP800-63B] Temoshok D, Fenton JL, Choong YY, Lefkovitz N, Regenscheid A, Galluzzo R, Richer JP (2025) Digital Identity Guidelines: Authentication and Authenticator Management. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-63B-4. https://doi.org/10.6028/NIST.SP.800-63B-4
[SP800-63C] Temoshok D, Richer JP, Choong YY, Fenton JL, Lefkovitz N, Regenscheid A, Galluzzo R (2025) Digital Identity Guidelines: Federation and Assertions. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-63C-4. https://doi.org/10.6028/NIST.SP.800-63C-4
[SP800-122] McCallister E, Grance T, Scarfone KA (2010) Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-122. https://doi.org/10.6028/NIST.SP.800-122
This section is informative.
A wide variety of terms are used in the realm of digital identity. While many definitions are consistent with earlier versions of SP 800-63, some have changed in this revision. Many of these terms lack a single, consistent definition, warranting careful attention to how the terms are defined here.
One-way — It is computationally infeasible to find any input that maps to any pre-specified output.
Collision-resistant — It is computationally infeasible to find any two distinct inputs that map to the same output.
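These two properties can be illustrated with a standard approved hash function. The following is an informative sketch using Python's hashlib with SHA-256, not a normative requirement of these guidelines:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hash of the input as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hashing is deterministic: the same input always yields the same output.
assert digest(b"subscriber-record") == digest(b"subscriber-record")

# Even a one-character change produces an unrelated digest. Recovering an
# input from a digest ("one-way") or finding two distinct inputs with the
# same digest ("collision-resistant") is computationally infeasible.
print(digest(b"subscriber-record"))
print(digest(b"subscriber-recorD"))
```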
Compute the corresponding public key,
Compute a digital signature that may be verified by the corresponding public key,
Decrypt keys that were encrypted by the corresponding public key, or
Compute a shared secret during a key-agreement transaction.
Verify a digital signature that was generated using the corresponding private key,
Encrypt keys that can be decrypted using the corresponding private key, or
Compute a shared secret during a key-agreement transaction.
SP 800-63-1 updated NIST SP 800-63 to reflect current authenticator (then referred to as “token”) technologies and restructured it to provide a better understanding of the digital identity architectural model used here. Additional (minimum) technical requirements were specified for the CSP, protocols used to transport authentication information, and assertions if implemented within the digital identity model.
SP 800-63-2 was a limited update of SP 800-63-1 and substantive changes were only made in Sec. 5, Registration and Issuance Processes. The significant changes were intended to facilitate the use of professional credentials in the identity proofing process and to reduce the need to send postal mail to an address of record to issue credentials for level 3 remote registration. Other changes to Sec. 5 were minor explanations and clarifications.
SP 800-63-3 was a substantial update and restructuring of SP 800-63-2. It introduced individual components of digital authentication assurance (i.e., AAL, IAL, and FAL) to support the growing need for independent treatment of authentication strength and confidence in an individual’s claimed identity (e.g., in strong pseudonymous authentication). A risk assessment methodology and its application to IAL, AAL, and FAL were included in this guideline. It also moved the whole of digital identity guidance covered under SP 800-63 from a single document describing authentication to a suite of four documents (to separately address the individual components mentioned above), of which SP 800-63-3 is the top-level document.
Other areas updated in SP 800-63-3 included:
SP 800-63-4 substantially updates and reorganizes SP 800-63-3, including: