Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.


The development of a risk-aware organizational culture starts with defining responsibilities. For example, under some risk management structures, professionals carrying out test and evaluation tasks are independent from AI system developers and report through risk management functions or directly to executives. This kind of structure may help counter implicit biases such as groupthink or the sunk cost fallacy and may bolster risk management functions so that they are not easily bypassed or ignored.

Instilling a culture where AI system design and implementation decisions can be questioned and course-corrected by empowered AI actors can enhance organizations’ abilities to anticipate and effectively manage risks before they become ingrained.

Suggested Actions
  • Establish policies that define the AI risk management roles and responsibilities for positions directly and indirectly related to AI systems, including, but not limited to:
    • Boards of directors or advisory committees
    • Senior management
    • AI audit functions
    • Product management
    • Project management
    • AI design
    • AI development
    • Human-AI interaction
    • AI testing and evaluation
    • AI acquisition and procurement
    • Impact assessment functions
    • Oversight functions
  • Establish policies that promote regular communication among AI actors participating in AI risk management efforts.
  • Establish policies that separate management of AI system development functions from AI system testing functions, to enable independent course-correction of AI systems.
  • Establish policies to identify, increase the transparency of, and prevent conflicts of interest in AI risk management, and to counteract confirmation bias and market incentives that may hinder AI risk management efforts.
  • Establish policies that incentivize AI actors to collaborate with existing legal, oversight, compliance, or enterprise risk functions in their AI risk management activities.
Transparency and Documentation

Organizations can document the following:

  • To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?
  • Who is ultimately responsible for the decisions of the AI system, and is this person aware of its intended uses and limitations?
  • Are the responsibilities of the personnel involved in the various AI governance processes clearly defined?
  • What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?
  • Did your organization implement accountability-based practices in data management and protection (e.g., the PDPA and OECD Privacy Principles)?

AI Transparency Resources

  • WEF Model AI Governance Framework Assessment 2020. URL
  • WEF Companion to the Model AI Governance Framework, 2020. URL
  • GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. URL

References

Andrew Smith, “Using Artificial Intelligence and Algorithms,” FTC Business Blog (Apr. 8, 2020). URL

Off. Superintendent Fin. Inst. Canada, Enterprise-Wide Model Risk Management for Deposit-Taking Institutions, E-23 (Sept. 2017).

Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011).

Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021). URL

ISO, “Information Technology — Artificial Intelligence — Guidelines for AI applications,” ISO/IEC CD 5339. See Section 6, “Stakeholders’ perspectives and AI application framework.” URL
