No standards body has defined AI Governance. No regulation locks it. After reviewing every major framework, here is the definition the field is missing.
The phrase “AI Governance” appears in international treaties, executive orders, corporate reports, and academic handbooks. More than 40 countries have adopted governance principles through the OECD. The European Union built an entire regulatory architecture around it. ISO published standards for it. NIST built a risk management framework with governance as its first function.
None of them defined it.
Not ISO. Not NIST. Not the OECD. Not the EU AI Act. Not UNESCO. The term carries enormous institutional weight while remaining operationally undefined. That gap determines whether “AI Governance” means something enforceable or something decorative.
This article traces where the term came from, documents what every major framework says about it, identifies what none of them provide, and proposes the definition the field needs.
Where Did the Term AI Governance Come From?
The roots go back further than most assume. Mitchell Waldrop coined the term “machine ethics” in a 1987 AI Magazine article titled “A Question of Responsibility.” The AAAI held its first symposium on machine ethics in 2005. In 2019, Luciano Floridi and Josh Cowls built an ethical framework around four bioethics principles plus an AI-specific addition: explicability.
These early conversations asked what values a machine should reflect. This was Ethical AI. It addressed internal disposition, independent of who enforced anything.
The policy acceleration started in 2016. Anna Jobin, Marcello Ienca, and Effy Vayena documented this in a 2019 Nature Machine Intelligence study mapping 84 sets of AI ethics guidelines worldwide. Their finding: 88 percent appeared after 2016. Convergence emerged around five principles (transparency, justice, non-maleficence, responsibility, privacy). Divergence in how those principles translated into action was vast.
The institutions arrived the same year. The Partnership on AI launched in 2016, convening Apple, Amazon, Google, Facebook, IBM, and Microsoft. The IEEE started its Global Initiative on Ethics of Autonomous and Intelligent Systems. Matthijs Maas, in a 2025 Oxford Academic study, documented that international AI governance debates started in the early 2010s but gained sustained momentum in 2016.
Then the corporate wave hit. A peer-reviewed PMC analysis established the timeline: after sporadic calls from 2016 onward, corporate AI principles flooded in at the start of 2018. Google published its AI Principles in June 2018. Microsoft formalized its Responsible AI Standard. IBM followed.
“Responsible AI” became a department name. A team title. A product feature. It described something real: the engineering discipline of building ethical values into technical systems. Bias detection. Fairness metrics. Explainability dashboards.
But Responsible AI and AI Governance are not the same thing.
What Does Every Major Framework Say About AI Governance?
Five institutional sources address governance. None define it.
ISO 37000: Governance as Accountability
ISO 37000:2021 defines organizational governance as the foundation for fulfilling purpose in an effective, responsible, and ethical manner. It establishes accountability structures and decision-making authority. This is governance broadly, not AI specifically.
ISO/IEC 38507: Governance Implications of AI
ISO/IEC 38507:2022 addresses what happens to governance when organizations introduce AI. It covers accountability, decision-making authority, data governance, and risk. But it describes governance implications. It does not define “AI governance” as a term.
ISO/IEC 42001: Management, Not Governance
ISO/IEC 42001:2023 is widely called “the AI governance standard.” It is not. It is a management system standard. Within ISO’s own architecture, management systems describe how organizations operate. Governance describes who holds decision authority and who is accountable. ISO 42001 provides the management layer. ISO 37000 provides the governance layer. The conflation between these two is itself a symptom of the definitional problem.
NIST AI RMF: Governance as a Function
The NIST AI Risk Management Framework (2023) uses four functions: GOVERN, MAP, MEASURE, and MANAGE. GOVERN operates as a cross-cutting function setting organizational culture and risk posture. But NIST does not define AI governance. It defines a governance function within risk management.
OECD AI Principles: Governance as Values
The OECD Recommendation on Artificial Intelligence (2019, updated 2024) comes closest to a definitional layer. Adopted by more than 40 countries and referenced in the EU AI Act’s definition of AI systems, the OECD centers on inclusive growth, human-centred values, transparency, robustness, and accountability. Charlotte Stix traced this terminological history in a 2022 Discover Artificial Intelligence paper and found that different groups coined different terms with varying objectives. The OECD provides principles. It does not provide a definition.
EU AI Act: Governance as Compliance
The EU AI Act (2024) creates binding obligations. Article 14 mandates that high-risk AI systems be designed for effective human oversight by natural persons. The Council of Europe’s Framework Convention on AI, signed by 19 parties as of January 2026, establishes treaty-level governance principles. Both create legal requirements. Neither defines governance itself.
Five frameworks. Five lenses. Zero locked definitions.
Why Does AI Governance Have No Formal Definition?
The absence is structural, not accidental.
Standards bodies define management systems because that is what they certify. Risk frameworks define risk functions because that is what they measure. Intergovernmental bodies define principles because that is what they negotiate. Regulators define compliance obligations because that is what they enforce.
No institution has the mandate, the incentive, or the cross-domain authority to publish a unified definition spanning all five lenses.
This creates a practical problem. When an organization says it has “AI governance,” what does it have? A management system? A risk posture? Value alignment? Regulatory compliance? The term accommodates all of these. It clarifies none of them.
Six peer-reviewed studies across four journals confirmed the definitional gap from different angles. Jobin, Ienca, and Vayena (2019) mapped 84 AI ethics guidelines worldwide in Nature Machine Intelligence and found convergence on principles but vast divergence on implementation. Stix (2022) traced the terminological history in Discover Artificial Intelligence and found that different groups coined different terms with different objectives, none of which produced a locked definition. Maas (2025) documented in an Oxford Academic study that international AI governance debates gained momentum in 2016 without producing definitional consensus.

Batool, Zowghi, and Bano (2025) conducted a systematic literature review in AI and Ethics and found governance solutions fragmented across five levels with limited interoperability. Floridi and Cowan (2025) argued in the same journal that the persistent failure to move from principles to enforceable procedures remains the field’s central gap. Goffi (2022) warned in Revista Misión Jurídica that dominant governance frameworks embed Western-centric assumptions without engaging diverse philosophical traditions.

Taken together, the academic record does not just suggest the absence of a formal definition. It documents it.
Many organizations adopt the vocabulary of governance without building the operational infrastructure Floridi and Cowan call for. Ethical language without enforcement mechanisms. Responsible AI teams without stop authority. Governance frameworks without anyone who answers for outputs.
Some call this governance washing.
What Are the Criticisms of Current AI Governance Frameworks?
The shortcomings run deeper than missing definitions.
A 2019 analysis found that 88 percent of published AI ethics guidelines came from Europe and North America, raising concerns that global governance norms reflect a narrow set of cultural and philosophical assumptions. Emmanuel Goffi, writing in Revista Misión Jurídica (2022), argued that dominant AI governance narratives embed Western-centric universalism without genuine engagement with diverse philosophical traditions including African Ubuntu, Asian relational ethics, and Indigenous knowledge systems.
Corporate responsible AI pledges often lack external audit mechanisms. The gap between published principles and measurable compliance remains wide. A peer-reviewed PMC analysis documented that corporations such as Microsoft, IBM, and Google began to realize governmental regulation was unavoidable, but their early principles operated without enforcement teeth.
In the United States, the absence of comprehensive federal AI legislation has left governance dependent on executive orders, which can be rescinded by subsequent administrations. Executive Order 14110 (October 2023) directed federal agencies to manage AI risks. Executive Order 14148 (January 2025) rescinded it. Sectoral regulators (FDA, FTC, financial agencies) operate with mandates not designed for AI-specific risks.
The India AI Impact Summit in February 2026, the first in the global AI summit series hosted by a Global South nation, brought these tensions into institutional view. Discussions included the challenge of scaling governance standards developed in Western regulatory contexts to emerging economies with different institutional infrastructures.
The Batool, Zowghi, and Bano systematic review confirmed that existing governance solutions vary significantly across levels (team, organization, industry, national, international) with limited interoperability. No framework addresses all levels simultaneously.
What Is Missing From Every Existing Definition of AI Governance?
The institutional and academic evidence established the gap. A third layer of verification tested whether AI systems themselves could locate a formal definition. Eleven AI platforms were asked the same question through the HAIA-RECCLIN governance methodology: Claude, ChatGPT, Gemini, Grok, Perplexity, Mistral, DeepSeek, Meta AI, Copilot, Kimi, and MiniMax. Every platform independently confirmed that no canonical definition of AI Governance exists in the global standards landscape. Three platforms (Claude, Perplexity, and Gemini) were then tasked with synthesizing the institutional sources into a composite definition. All three converged on the same structural conclusion: the raw material exists across ISO, NIST, OECD, the EU AI Act, and UNESCO. No body assembled it into a single operational statement.
The institutional sources, taken together, point toward a composite definition that no single body has published.
From ISO 37000: accountability structures and decision-making authority.
From the OECD: alignment with human rights and the principle that humans remain ultimately responsible.
From NIST: organizational culture and risk posture that set the conditions for oversight.
From the EU AI Act Article 14: natural persons who can effectively oversee high-risk systems during use.
What none of them states explicitly but all of them imply: governance requires a named human who can stop the process, review the output, and answer for the result. Not a team. Not a department. A person whose identity is attached to the decision and whose judgment can be traced in an audit.
That accountability operates through four channels: moral obligation to act with care, employment consequence when judgment fails, civil liability when harm results, and criminal exposure when recklessness causes injury.
If accountability cannot reach a named person through at least one of these channels, what exists is process, not governance.
In high-velocity environments where outputs occur in milliseconds, such as algorithmic trading or automated cybersecurity response, the checkpoint shifts upstream: instead of reviewing individual outputs, the named human signs the policy that authorizes automated action, and the four accountability channels attach to that signature. Speed does not eliminate governance. It relocates it.
Everything below that threshold is Responsible AI. The engineering may be rigorous. The monitoring may be sophisticated. The principles may be sound. But without a named human who answers for outputs, the system governs itself. That is not governance. That is automation with guardrails.
The Pathway Model: From Ethical AI to AI Governance
The relationship between these terms is sequential, not interchangeable.
Ethical AI is the origin. Value formation, character architecture, constitutional grounding. All AI begins here. This layer concerns what values the system should reflect and how it should reason about moral questions.
Responsible AI evolves from Ethical AI. Character formation leads to technical implementation. Values become safeguards. Constitutional principles become monitoring systems. This evolution is real and valuable. But Responsible AI has a ceiling: the absence of individual human oversight. Even sophisticated Responsible AI with random spot-checking remains machine checking machine. No human stands accountable for individual outputs.
Some AI appropriately remains at Responsible AI permanently. Consumer chatbots, recommendation engines, writing assistants, code completion tools. These operate at scale incompatible with human checkpoint authority over individual outputs. That is not failure. That is appropriate placement based on reversibility, stakes, and scale.
AI Governance requires transformation. A named human holds binding authority at specific checkpoints, with personal accountability that survives audit. This pathway is required for sensitive domains regardless of efficiency cost: military applications, criminal justice, healthcare decisions, critical infrastructure, any domain where outputs carry irreversible or high-stakes consequence.
The pathway from Responsible AI to AI Governance is open. The key that opens it is individual human authority over individual outputs with named accountability.
A Proposed Definition of AI Governance
The field needs a locked definition. The institutional sources provide the raw material. This is a proposed assembly.
AI Governance Defined (Full Scope):
AI Governance is the system of decision authority, accountability structures, and oversight mechanisms through which named humans hold binding power to approve, modify, or halt AI outputs at defined checkpoints, with accountability that survives audit through moral, employment, civil, and criminal channels.
AI Governance Defined (Applied):
AI Governance exists when a qualified human holds binding authority at specific checkpoints, with personal accountability for the outputs that pass through.
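The applied definition lends itself to an operational sketch. The following is illustrative only; the names (`Checkpoint`, `Decision`, the reviewer "J. Example") are hypothetical and appear in no standard. It models a named human exercising approve, modify, or halt authority at a checkpoint, leaving a trace that survives audit:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the applied definition; not any standard's API.

@dataclass
class Decision:
    output_id: str
    reviewer: str           # a named human, never a team or department
    action: str             # "approve", "modify", or "halt"
    timestamp: str

@dataclass
class Checkpoint:
    name: str
    reviewer: str           # identity attached to every decision
    audit_log: list = field(default_factory=list)

    def decide(self, output_id: str, action: str) -> Decision:
        if action not in ("approve", "modify", "halt"):
            raise ValueError("reviewer holds three powers: approve, modify, halt")
        decision = Decision(
            output_id=output_id,
            reviewer=self.reviewer,
            action=action,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self.audit_log.append(decision)  # the trace that survives audit
        return decision

cp = Checkpoint(name="pre-release review", reviewer="J. Example")
d = cp.decide("output-001", "halt")
print(d.reviewer, d.action)  # J. Example halt
```

The point of the sketch is the audit log: every decision carries a person's identity, so the question "who answered for this output?" always has an answer.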
How Is AI Governance Different from Responsible AI?
Responsible AI describes the engineering discipline of building ethical values into AI systems. AI Governance adds the human. A named person holds decision authority over outputs, with accountability that can be audited. Responsible AI shapes the machine. AI Governance answers for it.
How Is AI Governance Different from AI Ethics?
AI Ethics establishes the values a system should reflect. AI Governance enforces those values through human decision authority at defined checkpoints. Ethics is the compass. Governance is the hand on the wheel.
How Is AI Governance Different from AI Risk Management?
AI Risk Management identifies and mitigates risks across the AI lifecycle. AI Governance assigns a named human with binding authority to act on those risks before outputs reach consequence. Risk management maps the terrain. Governance decides who walks it.
How Is AI Governance Different from AI Management?
AI management describes how organizations operate AI systems through processes, controls, and lifecycle oversight. This is what ISO/IEC 42001:2023 provides. AI Governance describes who holds decision authority and who is personally accountable for AI outputs. Management is operational. Governance is constitutional. The conflation of management and governance is one of the most common errors in the field.
How Is AI Governance Different from AI Compliance?
AI compliance means meeting specific legal or regulatory requirements imposed by authorities such as the EU AI Act. Compliance is about satisfying external mandates. Governance is about establishing internal authority structures that ensure compliance happens because a named human is responsible for it. An organization can be compliant without being governed if no one personally answers for the outputs that achieve compliance.
How Is AI Governance Different from Trustworthy AI?
Trustworthy AI describes the desired quality of AI systems: transparent, explainable, accountable, robust, fair, and aligned with human goals. It is a description of what an AI system should be. AI Governance is the mechanism through which trustworthiness is enforced. Without governance, trustworthiness is an aspiration. With governance, it is an auditable standard with a named human attached.
What Are the Four Channels of AI Governance Accountability?
Moral obligation to act with care. Employment consequence when judgment fails. Civil liability when harm results. Criminal exposure when recklessness causes injury.
These four channels distinguish AI Governance from every other term in the field. Responsible AI has no named person who faces employment termination for a bad output. AI Ethics has no civil liability channel. AI Risk Management has no criminal exposure for recklessness. AI Governance requires all four channels to reach a named human. Remove any one, and what remains is a weaker form of oversight. Remove all four, and what remains is branding.
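The channel test can be written as a small predicate. This is a hedged illustration, not a standardized check; the function name and labels are invented for this sketch. It encodes both thresholds the article draws: at least one channel reaching a named person separates governance from mere process, while full AI Governance requires all four:

```python
from typing import Optional, Set

# Hypothetical sketch of the four-channel accountability test.
# Channel names follow the article; nothing here is a standardized API.

CHANNELS = {"moral", "employment", "civil", "criminal"}

def classify(named_human: Optional[str], reachable_channels: Set[str]) -> str:
    """Classify an oversight arrangement by who answers, and through what."""
    channels = reachable_channels & CHANNELS
    if named_human is None or not channels:
        return "process"         # no named person reachable: not governance
    if channels == CHANNELS:
        return "ai_governance"   # all four channels reach a named human
    return "weaker_oversight"    # a named human, but channels are missing

print(classify(None, {"moral"}))                        # process
print(classify("J. Example", {"moral", "employment"}))  # weaker_oversight
print(classify("J. Example", CHANNELS))                 # ai_governance
```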
How Does AI Governance Work in High-Speed AI Systems?
In high-velocity environments where outputs occur in milliseconds, such as algorithmic trading or automated cybersecurity response, the governance checkpoint shifts from individual output review to the governance policy that authorizes automated action. The named human signs the policy. The four accountability channels apply to that signature.
Speed does not eliminate governance. It moves the checkpoint upstream. The question changes from “who reviewed this output?” to “who authorized the policy that permits this class of outputs?” That authorization carries the same moral, employment, civil, and criminal accountability as a manual review.
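The upstream checkpoint can be sketched the same way. In this hypothetical model (all names invented for illustration), a signed policy authorizes a class of outputs, and an automated action is governed only when some signed policy, with a named signatory, covers its class:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative only: models the upstream checkpoint,
# not any real trading or security system's API.

@dataclass(frozen=True)
class SignedPolicy:
    policy_id: str
    signatory: str              # the named human; the four channels attach here
    authorized_classes: frozenset

def authorizing_signatory(output_class: str,
                          policies: List[SignedPolicy]) -> Optional[str]:
    """Answer: who authorized the policy that permits this class of outputs?"""
    for p in policies:
        if output_class in p.authorized_classes:
            return p.signatory
    return None  # no signed policy covers this class: the action is ungoverned

policies = [
    SignedPolicy("POL-7", "J. Example",
                 frozenset({"limit_order", "cancel_order"})),
]
print(authorizing_signatory("cancel_order", policies))  # J. Example
print(authorizing_signatory("market_order", policies))  # None
```

The lookup never inspects the individual output; it asks only whether a named person signed for the class of action, which is exactly where the article places the checkpoint in millisecond-scale systems.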
What Is the Council of Europe Framework Convention on AI?
The Framework Convention on Artificial Intelligence, opened for signature on 5 September 2024 under the Council of Europe, is the first binding international treaty addressing AI governance. It establishes principles including transparency, accountability, non-discrimination, and human rights protection. As of January 2026, 19 parties had signed, including the European Union. The convention complements the EU AI Act by establishing treaty-level governance obligations beyond the European Union’s regulatory jurisdiction.
What Is Governance Washing?
Governance washing occurs when organizations adopt governance vocabulary without building operational enforcement mechanisms. This includes publishing ethical principles without stop authority, naming Responsible AI teams without decision rights, creating governance frameworks without anyone personally accountable for outputs, and claiming AI governance certification through management system standards (ISO 42001) while lacking the governance layer (ISO 37000) that assigns decision authority and accountability.
Floridi and Cowan (2025) documented this as the persistent gap between principles and enforceable procedures. The proposed definition of AI Governance provides an operational test: if no named human holds binding authority at defined checkpoints with personal accountability through four channels, what exists is not governance regardless of what the organization calls it.
Who Proposed This Definition of AI Governance?
Basil C. Puglisi, MPA, proposed this definition in 2025 as part of the HAIA-RECCLIN framework for human-AI governance and the Checkpoint-Based Governance (CBG) methodology. The definition synthesizes ISO 37000 (accountability structures), OECD AI Principles (human responsibility), NIST AI RMF (governance function), and EU AI Act Article 14 (human oversight by natural persons) into a single operational statement.
The definition fills a documented gap: no standards body, regulatory authority, or academic consensus document has published a formal, standalone definition of AI Governance despite the term appearing in treaties, executive orders, and international standards across more than 40 countries.
Puglisi holds an MPA from Michigan State University and retired from the Port Authority Police Department after twelve years of service. The HAIA-RECCLIN framework operates across 11 AI platforms with human arbitration as a constitutional requirement for all AI-assisted decisions.
Sources
Batool, A., Zowghi, D., & Bano, M. (2025). AI governance: a systematic literature review. AI and Ethics, 5, 3265-3279. https://doi.org/10.1007/s43681-024-00653-w
Floridi, L., & Cowan, S. (2025). Operationalizing accountability in AI governance: From principles to procedures. AI and Ethics, 5(2). https://doi.org/10.1007/s43681-025-00438-2
Goffi, E. R. (2022). Respecting cultural diversity in ethics applied to AI: A new approach for multicultural governance. Revista Misión Jurídica.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389-399. https://doi.org/10.1038/s42256-019-0088-2
Maas, M. M. (2025). Architectures of Global AI Governance: From Technological Change to Human Choice. Oxford Academic. https://doi.org/10.1093/9780191988455.003.0001
Stix, C. (2022). Artificial intelligence by any other name: A brief history of the conceptualization of “trustworthy artificial intelligence.” Discover Artificial Intelligence, 2(26). https://doi.org/10.1007/s44163-022-00041-5
Waldrop, M. (1987). A question of responsibility. AI Magazine.
de Laat, P. B. (2021). Companies committed to responsible AI: From principles towards implementation and regulation? Philosophy & Technology. PMC8492454.
Council of Europe. (2026, January 28). Armenia signs the Framework Convention on Artificial Intelligence. https://www.coe.int/en/web/artificial-intelligence/-/armenia-signs-the-council-of-europe-s-framework-convention-on-artificial-intelligence-1
European Union. (2024). Artificial Intelligence Act, Article 14: Human Oversight. https://artificialintelligenceact.eu/article/14/
International Organization for Standardization. (2021). ISO 37000:2021 Governance of Organizations. https://www.iso.org/standard/65034.html
International Organization for Standardization. (2022). ISO/IEC 38507:2022 Governance Implications of the Use of AI by Organizations. https://www.iso.org/standard/56641.html
International Organization for Standardization. (2023). ISO/IEC 42001:2023 AI Management Systems. https://www.iso.org/standard/42001
National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0). https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
National Institute of Standards and Technology. (2024). Generative AI Profile (NIST AI 600-1). https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
OECD. (2019). Recommendation of the Council on Artificial Intelligence. https://oecd.ai/en/ai-principles
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/en/legal-affairs/recommendation-ethics-artificial-intelligence