
Threats and Security Considerations

This section is informative.

Effectively protecting identity proofing processes requires layering security controls and processes throughout a transaction with a given applicant. To achieve this, it is necessary to understand where and how threats can arise and compromise enrollments. There are four general categories of threats to the identity proofing process:

  1. Impersonation: An attacker attempts to pose as another legitimate individual (e.g., identity theft)
  2. False or fraudulent representation: An attacker attempts to create a false identity or false claims about an identity (e.g., synthetic identity fraud)
  3. Social engineering attacks: An attacker uses deception or coercion to convince a victim to take some action during the identity proofing process with the ultimate goal of controlling access to the resulting subscriber account
  4. Infrastructure attacks: An attacker attempts to compromise the confidentiality, availability, or integrity of the infrastructure, data, software, or people supporting the CSP’s identity proofing process (e.g., distributed denial of service, insider threats)

This section focuses on impersonation attacks, false or fraudulent representation threats, and social engineering attacks. Infrastructure threats are addressed by traditional computer security controls (e.g., intrusion protection, record keeping, independent audits) and are outside the scope of this document. For additional security controls beyond those provided in these guidelines, see [SP800-53], Security and Privacy Controls for Information Systems and Organizations.

This section does not provide guidance or controls that specifically address AI as a discrete threat type. Instead, the mitigations below and the requirements in these guidelines address specific threats that may be perpetrated or scaled by attackers with AI tools (e.g., using AI-generated forged documents or media to impersonate an applicant).

Table 2. Identity proofing and enrollment threats

Automated Enrollment Attempts
  Description: Attacker leverages scripts and automated processes to rapidly generate large volumes of enrollments.
  Example: Bots leverage stolen data to submit benefits claims.

Evidence Falsification
  Description: Attacker creates or modifies evidence in order to claim an identity.
  Example: A fake driver’s license is used as evidence.

Synthetic Identity Fraud
  Description: Attacker fabricates evidence of an identity that is not associated with a real person.
  Example: A credit card is opened under a fake name to create a credit file.

Fraudulent Use of Identity (Identity Theft)
  Description: Attacker fraudulently uses another individual’s identity or identity evidence.
  Example: An individual uses a stolen passport.

Social Engineering
  Description: Attacker convinces a legitimate applicant to provide identity evidence or complete the identity proofing process under false pretenses.
  Example: An individual submits their identity evidence to an attacker who is posing as a potential employer.

False Claims
  Description: Attacker associates false attributes or information with a legitimate identity.
  Example: An individual falsely claims residence in a state in order to obtain a benefit that is only available to state residents.

Video or Image Injection Attack
  Description: Attacker injects a fake video feed to impersonate a real-life person.
  Example: A deepfake video is used to impersonate the individual portrayed on a stolen driver’s license.

Threat Mitigation Strategies

Threats to the enrollment and identity proofing process are summarized in Table 2. Related mechanisms that assist in mitigating these threats are summarized in Table 3. These mitigations should not be considered comprehensive but rather a summary of mitigations that are detailed more thoroughly at each IAL and applied based on the risk assessment processes detailed in Sec. 3 of [SP800-63].

Table 3. Identity proofing and enrollment threat mitigation strategies

Automated Enrollment Attempts
  Mitigation strategies: Web application firewall (WAF) controls and bot detection technology; out-of-band engagement (e.g., confirmation codes); biometric verification and liveness detection mechanisms; traffic and network analysis capabilities to identify indications of malicious traffic.
  Normative references: 3.5, 3.8, 3.11

Evidence Falsification
  Mitigation strategies: Validation of core attributes with authoritative or credible sources; validation of physical or digital security features of the presented evidence.
  Normative references: 4.1.4, 4.1.5, 4.2.4, 4.2.5, 4.3.4, 4.3.5

Synthetic Identity Fraud
  Mitigation strategies: Collection of identity evidence; validation of core attributes with authoritative or credible sources; biometric comparison of the applicant to validated identity evidence or biometric data; checks against vital statistics repositories (e.g., the Death Master File).
  Normative references: 3.2.1, 4.1.2, 4.1.5, 4.1.6, 4.2.2, 4.2.5, 4.2.6, 4.3.2, 4.3.5, 4.3.6

Fraudulent Use of Identity (Identity Theft)
  Mitigation strategies: Biometric comparison of the applicant to validated identity evidence or biometric data; presentation attack detection (PAD) measures to confirm the genuine presence of the applicant; out-of-band engagement (e.g., confirmation codes) and notice of proofing; checks against vital statistics repositories (e.g., the Death Master File); fraud, transaction, and behavioral analysis capabilities to identify indicators of potentially malicious account establishment.
  Normative references: 3.2.1, 3.8, 3.10, 3.11, 4.1.6, 4.2.6, 4.3.6

Social Engineering
  Mitigation strategies: Training trusted referees to identify indications of coercion or distress; out-of-band engagement and notice of proofing to a validated address; information for and communication with end users on common threats and schemes; on-site, in-person attended identity proofing options.
  Normative references: 2.1.3, 3.8, 3.11, 3.14, 8.1.4

False Claims
  Mitigation strategies: Geographic restrictions on traffic; validation of core attributes with authoritative or credible sources.
  Normative references: 3.2.1, 4.1.5, 4.2.5, 4.3.5

Video or Image Injection Attack
  Mitigation strategies: Use of a combination of active and passive PAD; use of authenticated protected channels for communications between devices and servers; authentication of biometric sensors; monitoring and analysis of incoming video and image files to detect signs of forgery or modification; use of active countermeasures.
  Normative references: 3.8, 3.14
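Several of the mitigations above rely on out-of-band engagement with confirmation codes. As an illustrative sketch only (the function names and the 10-minute validity window are assumptions, not requirements of these guidelines), a CSP-side implementation might generate an unpredictable code, deliver it to a validated address, and verify the applicant's response with a constant-time comparison before the code expires:

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 600        # assumed validity window; tune per policy
CODE_ALPHABET = "0123456789"  # numeric codes are easy to transcribe
CODE_LENGTH = 8

def issue_confirmation_code() -> tuple[str, float]:
    """Generate a cryptographically random code and its expiry timestamp.

    The code would then be sent out of band (e.g., SMS or postal mail)
    to an address that has already been validated.
    """
    code = "".join(secrets.choice(CODE_ALPHABET) for _ in range(CODE_LENGTH))
    return code, time.time() + CODE_TTL_SECONDS

def verify_confirmation_code(submitted: str, issued: str, expires_at: float) -> bool:
    """Reject expired codes, then compare in constant time to avoid
    leaking information through timing differences."""
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(submitted.encode(), issued.encode())
```

In practice the issued code and expiry would be stored server-side keyed to the proofing session, and repeated failed attempts would be rate-limited, which also supports the bot-detection mitigations listed for automated enrollment attempts.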

Collaboration With Adjacent Programs

Close coordination of identity proofing and CSP functions with cybersecurity, privacy, threat intelligence, and program integrity teams enables more complete protection of business capabilities while continuously improving identity proofing capabilities. For example, payment fraud data collected by program integrity teams could indicate compromised subscriber accounts and potential weaknesses in identity proofing implementations. Similarly, threat intelligence teams may receive indications of new tactics, techniques, and procedures that could affect identity proofing processes. CSPs and RPs should establish consistent channels for exchanging information among critical security and fraud stakeholders. If the CSP is external to the RP, contractual and legal agreements can be used to establish these channels, including their technical and interoperability requirements. All data collected, transmitted, or shared should be minimized and subject to a detailed privacy and legal assessment.
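One possible data minimization technique for such exchanges is pseudonymization with a keyed hash, sketched below. The shared key, field names, and scheme are illustrative assumptions, not part of these guidelines; the idea is that two parties holding the same key can correlate fraud signals about the same subject without transmitting the raw identifier:

```python
import hashlib
import hmac

# Hypothetical key agreed between the CSP and RP out of band;
# in practice this would come from a key management system.
SHARED_KEY = b"example-shared-key"

def pseudonymize(identifier: str) -> str:
    """Keyed hash (HMAC-SHA-256) of a subscriber identifier.

    Normalizing case first keeps the pseudonym stable across
    cosmetic variations of the same identifier.
    """
    normalized = identifier.strip().lower().encode()
    return hmac.new(SHARED_KEY, normalized, hashlib.sha256).hexdigest()

def minimized_fraud_signal(identifier: str, indicator: str) -> dict:
    """Share only what is needed: a pseudonymous subject reference
    and the type of fraud indicator, with no raw PII."""
    return {"subject": pseudonymize(identifier), "indicator": indicator}
```

A keyed hash rather than a plain digest is a deliberate choice here: without the key, a third party cannot confirm guesses about the underlying identifier by hashing candidate values.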