The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures.

Policies, processes, and procedures are central components of effective AI risk management and fundamental to individual and organizational accountability.

Organizational policies and procedures will vary based on available resources and risk profiles, but can help systematize AI actor roles and responsibilities throughout the AI lifecycle. Without such policies, risk management can be subjective across the organization and can exacerbate rather than minimize risks over time. Policies, or summaries thereof, are understandable to relevant AI actors. Policies reflect an understanding of the underlying metrics, measurements, and tests that are necessary to support policy and AI system design, development, deployment, and use.

Lack of clear information about responsibilities and chains of command will limit the effectiveness of risk management.

Suggested Actions

Establish and maintain formal AI risk management policies that address AI system trustworthy characteristics throughout the system’s lifecycle. Organizational AI policies:

  • Define key terms and concepts related to AI systems and the scope of their purposes and intended uses.
  • Align to broader data governance policies and practices, particularly the use of sensitive or otherwise risky data.
  • Detail standards for experimental design, data quality, and model training.
  • Outline and document risk mapping and measurement processes and standards.
  • Detail model testing and validation processes.
  • Detail review processes for legal and risk functions.
  • Establish the frequency of and detail for monitoring, auditing and review processes.
  • Outline change management requirements.
  • Outline processes for internal and external stakeholder engagement.
  • Establish whistleblower policies to facilitate reporting of serious AI system concerns.
  • Detail and test incident response plans.

Additional suggested actions:

  • Verify that formal AI risk management policies align to existing legal standards and to industry best practices and norms.
  • Establish AI risk management policies that broadly align to AI system trustworthy characteristics.
  • Verify that formal AI risk management policies include currently deployed and third-party AI systems.

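The policy contents above (defined responsibilities, review cadences, coverage of deployed and third-party systems) can also be tracked in machine-readable form. The sketch below is purely illustrative, assuming a hypothetical in-house policy register; all class names, fields, and example entries are invented for this example, and an organization's actual policy structure will differ.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a minimal machine-readable register of AI risk
# management policy items. All names and entries here are hypothetical.

@dataclass
class PolicyItem:
    name: str                   # e.g. "Model testing and validation"
    owner: str                  # accountable role (not an individual); "" = unassigned
    lifecycle_stage: str        # e.g. design / development / deployment / use
    review_frequency_days: int  # how often the policy item is revisited

@dataclass
class PolicyRegister:
    items: list = field(default_factory=list)

    def add(self, item: PolicyItem) -> None:
        self.items.append(item)

    def unassigned(self) -> list:
        # Items lacking an accountable owner: a gap in the chain of command
        # that would limit the effectiveness of risk management.
        return [i.name for i in self.items if not i.owner]

register = PolicyRegister()
register.add(PolicyItem("Incident response plan", "Risk function", "deployment", 90))
register.add(PolicyItem("Third-party model review", "", "use", 180))

print(register.unassigned())  # flags items with no accountable owner
```

A register like this makes gaps auditable: the `unassigned()` check surfaces policy items with no accountable role, directly supporting the clear-chain-of-command point above.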
Transparency and Documentation

Organizations can document the following:

  • To what extent do these policies foster public trust and confidence in the use of the AI system?
  • What policies has the entity developed to ensure the use of the AI system is consistent with its stated values and principles?
  • To what extent are the model outputs consistent with the entity’s values and principles to foster public trust and equity?

AI Transparency Resources:
GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. URL


Office of the Comptroller of the Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021). URL

GAO, “Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities,” GAO-21-519SP, June 2021. URL

NIST, “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools”. URL

Zachary Lipton, Julian McAuley, and Alexandra Chouldechova, “Does mitigating ML’s impact disparity require treatment disparity?” Advances in Neural Information Processing Systems, 2018. URL

SAS Institute, “The SAS® Data Governance Framework: A Blueprint for Success”. URL

ISO, “Information technology — Reference Model of Data Management,” ISO/IEC TR 10032:2003. URL

“Play 5: Create a formal policy,” Partnership on Employment & Accessible Technology (PEAT). URL