A Governance Practitioner’s Examination of the Diary of a CEO Interview and Empire of AI

A journalist with engineering training spent eight years investigating the AI industry and concluded that the major companies operate as empires. A governance practitioner who builds open-source infrastructure for the same industry watched the two-hour interview where she made that case and found that some of her claims hold under scrutiny while others do not. Both observations matter, because governance that accepts evidence uncritically is no better than governance that ignores it.
Karen Hao’s conversation with Steven Bartlett on The Diary of a CEO in March 2026 has generated significant public attention. The interview draws on her book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, which won the National Book Critics Circle Award for Nonfiction, reached the New York Times bestseller list, and earned Hao a place on TIME’s TIME100 AI list. The claims she makes deserve the same rigor she applies to the companies she investigates.
This paper examines each major claim from the interview, tests it against available evidence, identifies where it holds and where it does not, and maps the strongest findings to published governance architecture that addresses the structural problems she documents.
The governance frameworks referenced in this paper require context. AI Provider Plurality, Checkpoint Based Governance (CBG), GOPEL, and HAIA-CAIPR are published open-source at github.com/basilpuglisi/HAIA under a Creative Commons license. They are working concepts: specified architecture with documented operational evidence, submitted to Congress and published on SSRN, but not yet enacted, peer-reviewed, or production-validated. Other governance approaches exist, including the EU AI Act’s human oversight clauses, centralized licensing models, and standards-body certification frameworks. The frameworks referenced here represent one implementation path among those that may emerge. The AI Provider Plurality Congressional Package was submitted to the 119th Congress in February 2026. None of this work is affiliated with any corporation or funded by any investor.
Who Karen Hao Is
Hao trained as a mechanical engineer at MIT, joined a Google[x] spin-out startup in Silicon Valley focused on climate technology, and watched the board fire the CEO because the company was not profitable. That experience reoriented her career toward journalism, where she could ask the questions the innovation ecosystem was not asking of itself.
She served as senior AI editor at MIT Technology Review from 2018 to 2022, where she created The Algorithm newsletter (Webby Award nominee, 2019) and co-produced In Machines We Trust (Front Page Award, 2020). She moved to the Wall Street Journal in 2022 as a foreign correspondent covering China’s technology industry, then left in 2023 to write Empire of AI full-time.
Her investigative work has been cited by Congress in five separate hearings or documents. The most significant pieces include her coverage of Dr. Timnit Gebru’s firing from Google, her nine-month investigation into Facebook’s responsible AI team, and her reporting on how Facebook and Google fund global misinformation, which was entered into the Congressional record.
She co-created the Pulitzer Center’s AI Spotlight Series, sits on the board of the AI Now Institute, and holds fellowships from Harvard, MIT Knight Science Journalism, and the Pulitzer Center’s AI Accountability Network. Business Insider named her to its 2026 AI Power List.
Empire of AI draws on more than 300 interviews with roughly 260 people, including over 90 former or current OpenAI employees and executives. OpenAI declined to cooperate and did not respond to forty pages of requests for comment.
That credibility is exactly why her claims require testing rather than wholesale acceptance. The interview covers nine major claims, and the evidence supports some more strongly than others.
The Claims That Hold
Five of Hao’s claims survive scrutiny with the evidence available as of March 2026. Each is documented, corroborated by independent sources, and carries structural implications for governance architecture.
Knowledge Production Control
Hao argues that AI companies control the research ecosystem by funding the scientists who study their own systems and censoring researchers who produce inconvenient findings. She documents this with the Gebru case at Google, where an ethical AI team co-lead was fired after co-authoring a research paper critical of large language models. Google also fired Gebru’s co-lead, Margaret Mitchell.
This claim is factual, documented, and corroborated by multiple independent sources. Nine members of Congress, including Senators Elizabeth Warren and Cory Booker, cited Hao’s reporting when demanding answers from Google. Some companies have since reformed their AI ethics structures, yet the structural risk of capture remains because the incentive architecture has not changed: when the funding source and the research subject are the same entity, the conditions for capture exist regardless of individual intentions.
Hao extends the argument to journalism, documenting how AI companies use access as currency. They offer interviews, office visits, and product previews to journalists who produce favorable coverage, and withdraw all of it from those who do not. OpenAI shut the door on Hao in 2020 after her MIT Technology Review profile displeased the leadership, and later dangled then withdrew cooperation on her book. This access economy functions as a filtration system that shapes which information reaches voters and policymakers before any governance conversation begins.
It is worth noting that Hao’s reporting itself constitutes a form of information concentration in reverse. She holds the internal documents and controls access to them. Her book is the sole published source for several claims this paper evaluates. If knowledge production control is a governance risk when companies practice it, the structural observation applies to any entity that holds primary source material without independent replication, and that includes investigative journalists.
This finding supports the case for source-authority discrimination in audit trails. GOPEL’s three-tier distinction (Tier 0 = human arbiter, Tier 1 = AI platform, Tier 2 = synthesizer) ensures that the provenance of every input is preserved and that no single source controls the governance layer. Hao’s evidence extends this principle beyond AI outputs to the research ecosystem itself.
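To make the tier distinction concrete, the sketch below shows how a tier-tagged audit record could preserve the provenance of every input and flag trails where a single non-arbiter source supplied everything. It is a minimal illustration; the class names, fields, and sole-source check are assumptions made for this paper, not the schema GOPEL v1.5 specifies.

```python
# Illustrative sketch of source-authority discrimination in an audit trail.
# Names and fields are hypothetical; GOPEL v1.5 defines its own schema.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import IntEnum
import hashlib

class SourceTier(IntEnum):
    ARBITER = 0      # Tier 0: human arbiter, final authority
    PLATFORM = 1     # Tier 1: AI platform producing raw output
    SYNTHESIZER = 2  # Tier 2: system that merges Tier 1 outputs

@dataclass(frozen=True)
class AuditRecord:
    source_id: str   # e.g. "claude", "gpt", "arbiter:jdoe" (illustrative)
    tier: SourceTier
    content: str
    timestamp: str
    digest: str      # content hash, so provenance survives later edits

def record(source_id: str, tier: SourceTier, content: str) -> AuditRecord:
    """Create a provenance-preserving entry for the audit trail."""
    return AuditRecord(
        source_id=source_id,
        tier=tier,
        content=content,
        timestamp=datetime.now(timezone.utc).isoformat(),
        digest=hashlib.sha256(content.encode()).hexdigest(),
    )

def sole_source(trail: list[AuditRecord]) -> bool:
    """Flag trails where one non-arbiter source supplied every input."""
    sources = {r.source_id for r in trail if r.tier != SourceTier.ARBITER}
    return len(sources) <= 1
```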
AGI Definition Shifting
Hao argues that OpenAI has described artificial general intelligence differently across audiences. One definition appears in the OpenAI Charter: “highly autonomous systems that outperform humans at most economically valuable work.” A financial threshold tied to one hundred billion dollars in profits has been reported in coverage of the Microsoft agreement. The Congress and consumer characterizations (curing cancer and solving climate change for Congress, best digital assistant for consumers) come from Hao’s interview framing of her documented reporting.
These descriptions approximate how OpenAI has framed AGI to different audiences, rather than being verbatim quotes from a single document. The variation is not emphasis or context but fundamental incompatibility. A system that cures cancer operates in a different technical domain than a digital assistant, which operates in a different economic domain than a profit threshold. When the definition of the governed object shifts depending on the stakeholder being addressed, any governance framework that accepts the company’s own characterization has already ceded the foundational question.
This is the operational case for why governance frameworks must define the governed object independently of the developer’s self-description. The AI Provider Plurality Congressional Package addresses this by mandating API accessibility and multi-provider comparison, deriving the definition of what a system does from observable behavior across platforms rather than from any single company’s marketing.
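A minimal sketch of that behavioral approach follows. The probe tasks, pass/fail checks, and provider callables are hypothetical stand-ins; the Congressional Package specifies the mandate, not this code.

```python
# Hypothetical sketch: the same probe suite runs against every provider,
# and the "definition" of what a system does is the observed pass/fail
# profile, not the vendor's self-description. All probes are illustrative.
from typing import Callable

Probe = tuple[str, Callable[[str], bool]]  # (prompt, check applied to output)

PROBES: list[Probe] = [
    ("What is 17 * 23? Answer with the number only.", lambda out: "391" in out),
    ("Translate 'good morning' into French.", lambda out: "bonjour" in out.lower()),
]

def capability_profile(ask: Callable[[str], str]) -> dict[str, bool]:
    """Observed behavior of one provider across the shared probe suite."""
    return {prompt: check(ask(prompt)) for prompt, check in PROBES}

def governed_object(providers: dict[str, Callable[[str], str]]) -> dict[str, dict[str, bool]]:
    """Cross-provider behavioral record: the definition governance works from."""
    return {name: capability_profile(ask) for name, ask in providers.items()}
```

The design point is that the capability profile is produced by observation across platforms, so no single provider’s marketing language enters the definition of the governed object.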
Revenue-Driven Capability Selection
Hao describes internal documents showing that AI companies select which model capabilities to advance based on which industries and countries will pay the most. Finance, law, medicine, healthcare, and commerce receive focused development because those markets generate revenue. Some developers also fund public-benefit projects such as AI for climate research and health applications, but the structural incentive toward revenue-producing capabilities remains the dominant pattern.
This claim is supported by internal documents obtained through Hao’s reporting, and the structural incentive is consistent with observable behavior across the industry. Capability advancement is not a neutral march toward general intelligence but a commercially directed process where the language of general intelligence provides cover for market-driven product development.
This finding maps directly to what the HAIA ecosystem documents as the Economic Override Pattern: the observable dynamic where corporate incentives systematically prioritize capability advancement over safety validation, and where profit maximization and competitive pressure create predictable governance failures absent mandatory accountability structures (Governing AI: When Capability Exceeds Control, Puglisi, 2025, Chapter 2). The Economic Override Pattern is a Tier 2 working concept: a framework supported by observable evidence (including a 2025 EY survey documenting that 76% of organizations deploy agentic AI while only 33% maintain responsible AI controls) but not yet independently validated as formal theory. Hao’s internal documents are primary source evidence for the pattern that governance architecture must treat as structural.
Data Annotation Labor Conditions
Hao describes the data annotation industry as a system that absorbs workers displaced by AI, pits them against each other for speed and cost, atomizes their work, and strips dignity and agency. She cites accounts from a New York Magazine article documenting award-winning directors, PhD holders, and law graduates performing annotation work under conditions that prevent them from meeting basic family obligations.
The labor conditions in data annotation have been extensively documented by Rest of World, Time, the Washington Post, and academic researchers. Worker-cooperative and union-affiliated annotation projects exist but are not yet the norm. The structural incentives Hao describes are real: third-party annotation firms compete on speed and cost, which drives working conditions downward. Some reviewers have noted that individual accounts may not be representative of the broader worker population, but the structural argument holds even with that qualification, because the incentive architecture produces downward pressure on conditions regardless of whether every worker experiences the worst outcomes.
Environmental Externalities
Hao documents the environmental impact of AI data centers, including power consumption at the Abilene Stargate facility, freshwater competition with drought-stressed communities, and emissions from the 35 gas turbines powering Musk’s Colossus facility in Memphis.
In late 2025, Hao acknowledged a unit conversion error that overstated one Chilean water use figure by a factor of 1,000. That correction matters, but it applies only to the Chilean estimate. Her broader reporting on power and water use in other AI infrastructure remains substantiated by independent sources. The Memphis Colossus and Abilene Stargate projects are independently verifiable through Bloomberg, Reuters, local media, and public filings.
The Claims That Require Challenge
The remaining four claims contain structural insights worth preserving, but each carries evidentiary or analytical weaknesses that governance practitioners should weigh before accepting them as settled.
The Empire Analogy
Hao frames the major AI companies as empires, identifying four characteristics: resource claiming, labor exploitation, knowledge production monopoly, and a mythology of necessity. She draws a direct comparison to historical colonial empires, particularly the British East India Company.
The analogy captures real structural dynamics, particularly around resource extraction and knowledge production control. However, it breaks down under closer examination. AI companies lack the military enforcement that defined historical empires. They operate within legal jurisdictions that can and do constrain them (the EU AI Act has forced compliance changes across the industry). They face genuine market competition from each other and from open-source alternatives. Multiple reviewers have noted these limitations: Masood (2025) argues that the “empire” metaphor makes OpenAI appear uniquely culpable rather than recognizing that peers follow similar industry playbooks. Johnson (2025) notes that the analogy is not fully drawn out in the book, with only the East India Company parallel developed in detail.
The analogy works best as a structural lens for understanding power concentration and works worst when pushed toward literal comparison with empires that killed, enslaved, and displaced populations by force of arms. Governance practitioners should use the structural insight while maintaining the distinction.
Self-Driving Car Predictions
Hao expresses skepticism that most cars in the United States will be autonomous within ten years. She argues that statistical engines make errors by nature, cannot generalize across locations without retraining, and that social trust and legal liability remain unsolved.
The safety data in Waymo’s operational environments has improved substantially. Waymo reports more than 170 million fully autonomous miles and substantially lower serious injury crash rates than human driver benchmarks in the cities where it operates. A peer-reviewed study published in Traffic Injury Prevention at 56.7 million miles found 91% fewer serious-injury-or-worse crashes than human benchmarks, while Waymo’s later safety update reports 92% fewer serious-injury-or-worse crashes across a larger mileage base. Swiss Re independently concluded that Waymo vehicles produce 92% fewer bodily injury claims over 25 million miles.
These figures require context that the raw numbers do not carry. Waymo operates in five US cities under mapped Operational Design Domains with favorable weather conditions, predictable road layouts, and pre-mapped infrastructure. The comparison to human drivers is within those same environments, not a national benchmark. Place a Waymo vehicle on the streets of New York City during rush hour, in Bangkok’s unstructured traffic, on an unpaved mountain road in Peru, or in a Mumbai intersection where lane markings are suggestions rather than rules, and the safety data would look fundamentally different. The human drivers being compared against in Phoenix and Austin are not the same driving population as the human drivers in Lagos or Jakarta. This is not an apples-to-apples comparison, and presenting it as one overstates the evidence for universal autonomous deployment.
Hao’s claim that statistical engines “technically cannot stop making errors” is true in the abstract but incomplete without the comparative frame. Human drivers also cannot stop making errors, and within mapped operational domains, the data shows human error rates are substantially higher. Her broader points about generalization to unmapped environments, unresolved social trust, legal liability, and the gap between strong performance in five US cities and a national deployment timeline within ten years remain valid and are not addressed by the current safety data.
For governance, the question is not whether autonomous systems are perfect but whether accountability structures can manage statistical risk transparently across the full range of deployment environments. Multi-provider comparison and audit trails apply to autonomous vehicle governance as much as to language model governance.
The “Bicycles vs. Rockets” Framework
Hao uses AlphaFold as an example of a “bicycle of AI,” a system that uses small curated datasets, requires less compute, and provides enormous benefit at low cost. She contrasts this with large language models as the “rockets of AI.”
The distinction between application-specific AI and general-purpose models is legitimate. Task-specific systems trained on curated domain data often produce higher-value outcomes per unit of compute. However, the example undermines the structural argument. AlphaFold was built by DeepMind, a subsidiary of Google/Alphabet, one of the companies Hao identifies as an AI empire. It required training on Google’s TPU clusters with substantial compute infrastructure, and AlphaFold 3 uses a diffusion network architecture that demands significant computational resources (Abramson et al., 2024). The AlphaFold Protein Structure Database runs on Google Cloud.
The implication that “bicycles of AI” emerge outside the corporate infrastructure Hao critiques is not supported by the example she chooses. The governance question is not whether bicycles or rockets are better but whether governance architecture ensures that the choice of which to build reflects public need rather than private return. This is the structural question the Economic Override Pattern identifies: when commercial incentives determine which capabilities get funded, the choice between bicycles and rockets is made by the market, not by the public.
Intelligence Scaling
Hao frames the hypothesis that scaling AI models leads to greater intelligence as an unproven belief held by researchers who profit from it. She argues that the hypothesis drives unsustainable resource consumption without scientific validation.
The mechanism debate is real. Neuroscientists and psychologists do not universally agree that brains are statistical engines, and the claim that scaling neural networks will produce human-equivalent intelligence remains a hypothesis. Hao is correct to identify this as hypothesis rather than established science.
However, the capabilities produced by scaling are not hypothetical. Measurable improvements in coding, mathematical reasoning, multi-step planning, and language understanding have accompanied scale increases. Scaling has also produced new failure modes, including hallucination patterns that emerge at scale and adversarial vulnerabilities that grow with model complexity, which means governance must respond to both capability gains and novel risks. The “jagged frontier” that Hao references is real, but the frontier itself has moved substantially. Framing all scaling as pure myth because the underlying mechanism is debated conflates two separate questions: whether the mechanism explanation is correct and whether the capability improvements are real. The second is measurable, and the first remains open.
The capabilities are real enough to require governance, and the mechanism uncertainty is real enough to reject claims of inevitability. The policy response is to build governance infrastructure that works regardless of which hypothesis proves correct. The AI Provider Plurality Congressional Package proposes infrastructure that governs observable behavior across multiple platforms, independent of any single theory about how intelligence works.
The nine claims above produce a clear pattern. The strongest findings all point to structural problems that persist regardless of personnel, intentions, or individual company behavior. The weakest claims share a common flaw: they overextend a valid structural insight into territory where the evidence does not yet reach. The governance question is what infrastructure addresses the structural problems while remaining honest about where the evidence stops.
Where Governance Architecture Addresses These Findings
Hao’s strongest findings identify structural problems that require structural solutions. Her reporting documents what happens in the absence of governance infrastructure. The connections below distinguish three layers: Hao’s primary evidence, the interpretive pattern that evidence reveals, and the specific governance mechanism proposed in response. The mechanisms referenced are published working concepts (Tier 2: specified architecture with operational evidence, not yet production-validated or peer-reviewed). They represent one implementation path. Other approaches may address the same structural problems through different architecture.
AI Provider Plurality responds to the concentration of cognitive power and knowledge production control that Hao documents. Hao’s evidence shows that a small number of companies control the research agenda, the deployment infrastructure, and the public narrative. The interpretive pattern is that this concentration creates structural capture risk across research, journalism, and policy. The proposed mechanism is the AI Provider Plurality Congressional Package, which proposes three legislative actions: fund GOPEL as national AI infrastructure, mandate API accessibility for AI companies operating in the United States, and invest in small AI platforms to guarantee the competitive diversity that makes governance real. The structural framing is that it is not a proposal for more regulation but the engineering that makes less regulation safe. The government did not invent cars, phones, planes, or electricity, but it built the infrastructure that made them safe and accessible. AI requires the same structural approach.
The Economic Override Pattern names the dynamic Hao documents with internal evidence. Companies selecting capabilities by market revenue is the evidence. The pattern that corporate incentives systematically prioritize capability advancement over safety validation across all risk domains is the interpretation (Governing AI, Puglisi, 2025, Chapter 2). The infrastructure response is mandatory accountability structures that persist regardless of who leads the company, because Hao’s observation that swapping CEOs does not fix the structure aligns with the architectural claim that governance must address incentive structures, not personnel.
The Constitutional Wall Principle addresses the governance structure vulnerability Hao raises most directly. Decision-making power over billions of lives concentrated in individuals who share neither the culture, history, nor lived experience of the people affected is the documented condition. Physical presence at a decision point without substantive engagement with affected populations is not governance but administration at scale. The principle operates orthogonally to any single framework, applying equally whether the governance architecture is checkpoint-based, standards-based, or regulatory.
Multi-Provider Divergence addresses Hao’s documentation of knowledge production control by ensuring that no single provider, research ecosystem, or platform serves as the sole source of truth. Hao’s evidence shows that convergence within a captured ecosystem is unreliable. The interpretive pattern is that convergence without dissent is a red flag requiring human verification outside the ecosystem. The proposed mechanism is HAIA-CAIPR’s multi-provider architecture, where the same prompt is dispatched to multiple platforms simultaneously and disagreements are surfaced for human review. In documented working concept operations, platforms produced materially different outputs on identical prompts in 15 to 25 percent of cases, and those disagreements triggered human verification that prevented error propagation. Provider plurality only works if genuinely independent providers exist, which is why the Congressional Package’s investment in small AI platforms is a structural requirement, not an afterthought.
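A minimal sketch of the dispatch-and-compare loop follows, assuming hypothetical provider callables and a naive text-similarity measure; HAIA-CAIPR v1.1 defines its own dispatch, comparison, and escalation rules.

```python
# Minimal sketch of multi-provider divergence detection. Provider callables
# and the similarity threshold are illustrative assumptions, not the
# HAIA-CAIPR v1.1 specification.
from concurrent.futures import ThreadPoolExecutor
from difflib import SequenceMatcher
from itertools import combinations
from typing import Callable

def dispatch(prompt: str, providers: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Send the identical prompt to every provider at the same time."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(ask, prompt) for name, ask in providers.items()}
        return {name: future.result() for name, future in futures.items()}

def diverges(outputs: dict[str, str], threshold: float = 0.75) -> bool:
    """True when any pair of outputs is materially different, the signal
    that routes the prompt to a Tier 0 human arbiter for review."""
    return any(
        SequenceMatcher(None, a, b).ratio() < threshold
        for (_, a), (_, b) in combinations(outputs.items(), 2)
    )

if __name__ == "__main__":
    fake_providers = {  # stand-ins for real platform APIs
        "provider_a": lambda p: "Highly autonomous systems that outperform humans at most work.",
        "provider_b": lambda p: "Whatever system first generates one hundred billion dollars.",
    }
    answers = dispatch("Define AGI in one sentence.", fake_providers)
    if diverges(answers):
        print("Platforms disagree on identical input: escalate to human review.")
```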
These four connections share a common thread. Each takes a problem Hao documents through investigative reporting and translates it into an infrastructure requirement that persists beyond any single company, leader, or policy cycle. Whether this specific architecture or a different one addresses the gap is a question the governance community must answer. That the gap exists is no longer in question.
What This Paper Does Not Claim
This paper does not claim that Hao’s work is wrong. It claims that her strongest findings deserve the infrastructure response they require, and her weakest claims deserve the scrutiny she applies to others.
This paper does not claim that the governance frameworks referenced here are the only valid approach. They are published, open-source, and stress-tested through multi-AI adversarial review, and other approaches may address the same structural problems through different architecture.
This paper does not claim neutrality in the sense of having no position. The position is that governance infrastructure is buildable, testable, and necessary, and that journalism like Hao’s provides the evidentiary base that makes infrastructure arguments credible. That is an alignment, and it is stated openly.
Frequently Asked Questions
What is Empire of Evidence and what does it examine?
Empire of Evidence is a white paper that examines nine major claims made by journalist Karen Hao in her March 2026 Diary of a CEO interview about her book Empire of AI. The paper tests each claim against available evidence, identifies five claims that hold under scrutiny and four that require challenge, and maps the strongest findings to published open-source AI governance architecture.
Which of Karen Hao’s claims does the paper find strongest?
The paper identifies five claims that hold: knowledge production control (AI companies funding and censoring researchers), AGI definition shifting across audiences, revenue-driven capability selection, data annotation labor conditions, and environmental externalities from AI data centers. Each is documented, corroborated by independent sources, and carries structural implications for governance.
Which of Karen Hao’s claims does the paper challenge?
Four claims require challenge: the empire analogy (illuminating but breaks down at literal comparison with colonial empires), self-driving car predictions (Waymo safety data in mapped domains is strong but does not validate nationwide deployment), the bicycles versus rockets framework (AlphaFold was built by Google DeepMind, undermining the implication that alternatives emerge outside corporate infrastructure), and intelligence scaling as pure myth (mechanism debate is real but capability improvements are measurable).
What is the Economic Override Pattern and how does it connect to Karen Hao’s findings?
The Economic Override Pattern, documented in Governing AI: When Capability Exceeds Control (Puglisi, 2025), identifies the structural dynamic where corporate incentives systematically prioritize capability advancement over safety validation. Karen Hao’s internal documents showing companies selecting capabilities by market revenue provide primary source evidence for this pattern. It is classified as a Tier 2 working concept supported by observable evidence including EY survey data.
What governance frameworks does the paper reference?
The paper references four frameworks from the HAIA ecosystem, all published open-source at github.com/basilpuglisi/HAIA: AI Provider Plurality (Congressional Package for federal AI infrastructure), Checkpoint Based Governance (constitutional authority framework), GOPEL (non-cognitive governance enforcement layer), and HAIA-CAIPR (cross-platform review protocol). All are working concepts, not production-validated systems.
Who is Karen Hao and what is Empire of AI?
Karen Hao is an MIT-trained mechanical engineer turned investigative journalist. She served as senior AI editor at MIT Technology Review and as a foreign correspondent at the Wall Street Journal. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Penguin Press, 2025) draws on 300+ interviews including 90+ OpenAI insiders. It won the National Book Critics Circle Award for Nonfiction, reached the NYT bestseller list, and earned Hao a place on TIME’s TIME100 AI list.
References
Abramson, J., et al. (2024). Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature, 630, 493-500.
Bartlett, S. (Host). (2026, March 26). AI whistleblower: We are being gaslit by the AI companies! They’re hiding the truth about AI! [Video]. The Diary of a CEO. https://www.youtube.com/watch?v=Cn8HBj8QAbk
Dzieza, J. (2026, March 9). The laid-off scientists and lawyers training AI to steal their careers. New York Magazine, Intelligencer. https://nymag.com/intelligencer/article/white-collar-workers-training-ai.html
DeepMind. (n.d.). AlphaFold. Google DeepMind. https://deepmind.google/technologies/alphafold/
DesignWhine. (2025, October 4). Book review: Empire of AI by Karen Hao. https://www.designwhine.com/book-review-empire-of-ai-by-karen-hao/
Gallup & Special Competitive Studies Project. (2025, September 17). Americans prioritize AI safety and data security. Gallup. https://news.gallup.com/poll/694685/americans-prioritize-safety-data-security.aspx
Hao, K. (2020, February 17). The messy, secretive reality behind OpenAI’s bid to save the world. MIT Technology Review.
Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. MIT Technology Review.
Hao, K. (2020, December 17). Congress wants answers from Google about Timnit Gebru’s firing. MIT Technology Review.
Hao, K. (2021, March 11). How Facebook got addicted to spreading misinformation. MIT Technology Review.
Hao, K. (2021, November 20). How Facebook and Google fund global misinformation. MIT Technology Review.
Hao, K. (2025). Empire of AI: Dreams and nightmares in Sam Altman’s OpenAI. Penguin Press.
Johnson, T. (2025, September 22). Book review of “Empire of AI” by Karen Hao. I’d Rather Be Writing.
Kim, M. (2025, August 13). Empire of AI by Karen Hao explores global costs of AI progress. Rest of World.
Kusano, K. D., Scanlon, J. M., Chen, Y. H., McMurry, T. L., Gode, T., & Victor, T. (2025). Comparison of Waymo rider-only crash rates by crash type to human benchmarks at 56.7 million miles. Traffic Injury Prevention, 26(sup1), S8-S20.
Masood, A. (2025, August 12). Empire of AI: Skepticism amid the hype. Medium.
National Book Critics Circle. (2026, March 26). National Book Critics Circle announces winners for publishing year 2025.
OpenAI. (n.d.). Charter. https://openai.com/charter/
Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. TIME.
Puglisi, B. C. (2025). Governing AI: When capability exceeds control. ISBN 9798349677687.
Puglisi, B. C. (2026). AI Provider Plurality: An infrastructure mandate for democratic AI systems. AI Provider Plurality Congressional Package. github.com/basilpuglisi/HAIA
Puglisi, B. C. (2026). Checkpoint Based Governance v5.0. github.com/basilpuglisi/HAIA
Puglisi, B. C. (2026). GOPEL: Governance Orchestrator Policy Enforcement Layer v1.5. github.com/basilpuglisi/HAIA
Puglisi, B. C. (2026). HAIA-CAIPR: Cross AI Platform Review specification v1.1. github.com/basilpuglisi/HAIA
Puglisi, B. C. (2026). HAIA framework architecture. SSRN Abstract ID 6195238.
Reuters. (2026, February 19). US civil rights group threatens to sue xAI over pollution. Reuters.
Slotkin, J. (2025). Waymo’s self-driving cars were involved in 91% fewer serious-injury-or-worse crashes [Opinion]. The New York Times.
TIME100 AI 2025: Karen Hao. (2025). TIME Magazine.
U.S. House of Representatives. (2021, December 1). Hearing record, Committee on Energy and Commerce.
Volpe, J. (2025). Karen Hao’s Empire of AI water use statistics. WIRED.
Waymo. (n.d.). Safety. https://waymo.com/safety/
Waymo. (n.d.). Safety impact. https://waymo.com/safety/impact/
Yahoo Finance. (2024). Microsoft, OpenAI financial definition of AGI.
Related Documents
- AI Provider Plurality Congressional Package
- GOPEL v1.5: The Non-Cognitive Governance Layer
- HAIA Ecosystem Overview
- GitHub Repository: HAIA
Basil C. Puglisi, MPA, is a Human-AI Collaboration Strategist and AI Governance practitioner operating independently via basilpuglisi.com. All frameworks referenced in this paper are published open-source at github.com/basilpuglisi/HAIA under Creative Commons Attribution-NonCommercial 4.0 International license. SSRN Abstract ID 6195238. #AIassisted
Basil C. Puglisi, MPA | basilpuglisi.com | March 2026
Methodology and Disclosure: This paper was developed under the HAIA-RECCLIN framework with the author as Tier 0 arbiter. AI platforms contributed to drafting, research, and structural review in assigned RECCLIN roles. The paper underwent adversarial review under the HAIA-CAIPR protocol across seven independent AI platforms (ChatGPT, Gemini, Perplexity, Grok, DeepSeek, Kimi, Mistral) with Claude conducting an eighth independent audit. All convergence findings and preserved dissent informed the final revision. #AIassisted