The boardroom conversation around artificial intelligence (AI) has shifted from questioning whether to invest in AI to strategizing how to capitalize on the opportunity AI affords. This sea change—driven by AI’s remarkable capacity to fuel innovation and disrupt competition—has accelerated widespread adoption across nearly every industry. At the same time, organizations are grappling with the very real and unfamiliar risks AI presents. Real-world cases of AI spreading deepfake news, leaking confidential information, and demonstrating bias highlight the externalities that emerge when boards do not prioritize and ensure proper AI governance.
AI Adoption Requires New Approaches to Governance
The pressure to adopt AI responsibly is mounting, while the stakes and challenges of doing so are becoming increasingly complex and multifaceted. AI presents a dynamic, evolving landscape of shifting regulatory requirements and fluctuating public trust, compounded by an unprecedented pace of advancement that makes the terrain even harder to navigate.
For corporations and boards, 2025 represents a critical juncture in adapting legacy governance models to an AI future. AI governance has emerged as a key function, guiding organizational policies and processes to address AI’s unique characteristics, such as its probabilistic nature and its need for continuous monitoring and adjustment. Boards have a crucial role to play in this arena, especially as AI challenges traditional governance models that can over-index on compliance and risk avoidance to the detriment of innovation and competitiveness.
Forecasts for AI Governance
As companies rethink their business strategies for AI, they must apply the same forward-looking approach to governance. Boards should focus on refining and adapting existing structures and oversight mechanisms to ensure effective AI governance. Looking ahead to 2025, several key developments can help shape these adaptations.
1. As AI adoption increases, AI incidents will continue to trend upward.
Data from the AI Incident Database (AIID), which tracks cases of ethical misuse and unintended consequences of AI, reveals a significant 26 percent increase in reported incidents from 2022 to 2023. Available data for 2024 indicates a further rise of more than 32 percent, a trajectory likely to continue into 2025.
Reported AI Incidents
Chart source: Stanford Artificial Intelligence Index Report 2024, Responsible AI chapter, Figure 3.1.2. Raw data source: AI Incident Database.
The persistent growth in incidents likely stems from several interconnected challenges. AI governance remains in its infancy, with risk assessment methods still evolving and far from universally applied. Additionally, boards often lack adequate visibility into what, if any, governance structures are in place. According to recent data, only 14 percent of boards discuss AI at every meeting, and 45 percent have yet to include AI on their agendas at all. This lack of consistent board engagement likely contributes to insufficient oversight and understanding of AI governance, increasing the risk of misuse and unintended consequences.
In the absence of clear regulatory mandates, organizations may lack sufficient incentives to prioritize AI risk management, a critical component of broader AI governance. Furthermore, AI risks can be challenging to anticipate because they arise not only from the models or systems themselves but also from the broader social and business contexts in which they are used—and those contexts naturally change over time. This can be seen in the figure below, which shows AI incident counts from 2016 to the present and highlights significant variance across sectors. This variation underscores the need for boards to consider not only technical AI risks but also contextual factors—such as industry-specific regulations, user demographics, and economic and market conditions—that influence how AI operates within specific industries.
AI Incidents by Sector, 2016-Present
Adapted from AIAAIC Repository, with some sectors combined for organization. Retrieved Dec. 2, 2024, from https://www.aiaaic.org/aiaaic-repository and licensed under CC BY-SA 4.0
2. Companies will underinvest in AI governance, with investment in AI adoption continuing to outpace investment in governance.
Organizations are under growing pressure to demonstrate their ability to succeed in the AI-driven era, motivating them to adopt AI technologies and push the boundaries of their capabilities. Lack of investment in responsible AI practices—such as risk assessments, bias evaluations, and documentation—indicates that governance, which provides the frameworks and oversight needed to implement these practices effectively, is treated as an afterthought rather than an essential enabler of long-term, sustainable value. For example, while 95 percent of senior leaders say their organizations are investing in AI, only 34 percent are incorporating AI governance, and just 32 percent are addressing bias in AI models (2024 EY Pulse Survey).
AI Adoption Outpaces Governance
- While 95 percent of senior leaders say their organizations are investing in AI, only 34 percent are incorporating AI governance, and just 32 percent are addressing bias in AI models. (2024 EY Pulse Survey)
- Only 11 percent of executives say they are implementing responsible AI practices across their organizations. (2024 PwC US Responsible AI Survey)
While there are incentives suggesting that AI governance will gain traction—such as the need to build customer trust, comply with emerging standards, and mitigate reputational risks—governance frameworks are still likely to lag behind the pace of AI advancement.
As the gap between investment in AI innovation and AI governance persists, companies will struggle to ensure responsible AI use, manage risks effectively, and build the necessary infrastructure to support sustainable growth. There is recognition that robust governance can enhance stakeholder confidence, attract ethical investors, and reduce legal liabilities. However, these pressures alone may not be enough to close the gap, underscoring the critical role that boards must play in balancing innovation with risk management.
3. Adoption challenges will persist, resulting in greater recognition that data foundations are lagging behind strategic priorities.
With widespread adoption of off-the-shelf generative AI tools, simply hosting a large language model (LLM) will no longer be enough to remain competitive. True differentiation will lie in how organizations move beyond point solutions and tools to leverage their unique business capabilities, harness proprietary data sets, and tailor AI solutions to their specific domains, driving value in ways that competitors cannot easily replicate. However, many organizations still struggle with managing and governing their data in a way that supports the development and deployment of AI.
Poor Data Foundations Undermine AI Progress
These struggles highlight the pervasive, ongoing challenge of building data infrastructure that supplies the reliable, accurate, and timely information AI-driven innovation requires. High-quality, well-governed data will be both a key challenge and a critical enabler of the advanced AI capabilities needed for meaningful differentiation. For boards, this underscores the importance of prioritizing data governance as part of the company’s broader AI strategy, so that the organization is not only investing in cutting-edge technology but also laying the foundation for its effective and responsible use.
Board Actions for Effective AI Governance
In the coming year, boards will face increasing scrutiny of how AI is being leveraged within their organizations, both in terms of the value it delivers and whether it is managed responsibly to mitigate risks and foster trust. This dynamic highlights the need for robust AI governance to navigate emerging challenges and position organizations for sustainable success. There are several ways that boards can engage in reshaping governance and oversight structures to ensure alignment with these evolving demands.
- Understand who is responsible for AI governance, and ensure that leaders are actively championing it. All members of the C-suite should be prepared to articulate the organization’s general approach to AI governance and understand their role in supporting its successful implementation, but specific leaders will carry distinct responsibilities depending on the nature of the AI adoption, as outlined below.
- Chief Executive Officer: Ensures AI investments align with corporate strategy and generate returns
- Chief Technology Officer: Guides the strategic direction for AI adoption and utilization
- Chief Risk/Compliance Officer: Anticipates and addresses enterprise-level AI system risks
- Chief Legal Officer: Monitors and prepares for the rapidly evolving policy and regulatory landscape
- Chief Data Officer: Safeguards data integrity and privacy across the AI life cycle
- Chief AI Officer: Integrates elements of these responsibilities while maintaining distinct focus areas
Without proper incentives and cross-functional participation, organizations risk reducing AI governance to a mere compliance exercise—or worse, engaging in “governance washing,” where responsible AI principles are professed but not meaningfully implemented. This undermines the potential of AI to create real value and can severely hinder organizational progress. Boards are uniquely positioned to stress the importance of leadership in driving this transformation.
- Understand how management has tuned governance approaches to the unique requirements of AI, with an emphasis on striking the right balance of enabling innovation and managing AI risks. Effective practices for responsible AI should make it easier for employees to innovate within safe, controlled environments, while also making it harder to use AI in ways that expose the company or society to risks. Boards should actively engage in discussions with cross-functional leadership, particularly those overseeing technical teams like data scientists and engineers, as well as risk-management and legal experts. These diverse perspectives are essential for evaluating how governance and risk functions influence innovation efforts, ensuring they lead to higher-quality AI solutions while fostering a deeper understanding of associated risks and impacts. Measuring AI governance is challenging, because it often involves quantifying avoided risks or the value gained through enhanced credibility. Boards should focus on indicators like customer trust in AI initiatives, organizational transparency, and the C-suite’s understanding of their role in governance to evaluate its implementation effectively.
- Go beyond foundational AI literacy by prioritizing continuous upskilling, and—depending on your AI strategy—consider embedding governance into your organizational culture as a shared responsibility across all levels. As AI becomes increasingly integral to business operations, a basic understanding of AI is essential, but it is not sufficient for organizations to stay ahead in the rapidly advancing AI world. Board members should participate in regular training on various AI-related topics, including AI governance, and collaborate with experts to deepen their knowledge where needed. Similarly, boards should ensure that management is investing in upskilling all levels of the workforce. This includes providing access to the latest AI technologies, as well as comprehensive governance training to empower employees to use AI ethically and responsibly. For organizations aiming to make AI a core component of their business strategy, governance must become an integral part of the culture.
- Evaluate how AI shifts the company’s risk profile and establish reporting structures to capture how this risk is being managed. While it is impossible to anticipate and control every potential AI risk, boards should guide companies to understand the specific risks and impacts tied to their AI initiatives, such as AI hallucinations, data privacy concerns, or the risk of bias in AI models. Additionally, boards should ensure that management is considering how AI’s use across the broader industry may influence their own risk exposure, such as regulatory changes or evolving public perception. This may involve requesting regular risk assessments, identifying high-risk areas, and ensuring that mitigation strategies are built into the company’s overall risk management framework. These practices also better prepare the company for aligning with AI governance frameworks and standards, such as NIST’s AI Risk Management Framework, which provides additional guidance on how to effectively manage AI risks.
- Guide management to think about AI solutions in the context of their broader corporate strategy, and to consider data as a critical component of both. To remain competitive, companies must move beyond simply “using AI” to strategically leveraging it to advance their unique business objectives. Data governance is central to this effort. High-quality data can unlock opportunities for impactful AI solutions, while gaps in data readiness should inform governance priorities. Boards should guide management toward solutions that are strategically valuable, technically feasible, and operationally manageable—ensuring they advance corporate goals, leverage the right data and expertise, and align with the company’s risk tolerance and governance frameworks. Boards can foster this alignment by promoting interdisciplinary collaboration among management teams responsible for AI, data, and corporate strategy.
Questions Directors Can Ask
- What steps is management taking to ensure that AI governance is culturally embedded throughout the organization, rather than treated as a top-down policy exercise?
- How are we aligning our AI strategy with the company’s broader corporate goals?
- What is the company’s approach to ensuring the quality and readiness of data for AI, and how are we addressing data gaps that could impact our AI goals?
- How are we balancing the need for innovation with management of AI-related risks across the organization?
- What processes do we have in place to ensure our AI governance model remains agile and resilient in the face of rapid technological and regulatory changes?
- Who within the organization is responsible for managing and reporting to the board on the AI program and its success?
- Has the organization delegated AI oversight to specific committees (drawing on board and senior leadership), and is it clear which aspects of AI governance each committee manages?
- Does the board currently possess sufficient AI knowledge, or is there a need for targeted training to enhance its understanding and oversight capabilities?
Defining Key Terms for AI Governance
- AI governance: A set of processes, policies, and standards that guide the responsible and effective use of AI across an organization
- Responsible AI: The practice of developing, deploying, and using AI technologies in a way that prioritizes ethical principles such as fairness and transparency, supported by a governance framework that enforces these practices
- AI ethics: A discipline that aims to address the moral implications of AI, forming an ethical foundation for AI governance to ensure AI technologies align with societal values
- AI principles: Guiding values such as fairness, privacy, transparency, and accountability that serve as the basis for AI governance frameworks
- AI risk assessment: A governance practice that involves identifying and managing the inherent and residual risks associated with AI systems across their life cycle