The U.S. Government Will Need to Seize AI Platforms and Data Centers if We Do Not Act

March 1, 2026 by Basil Puglisi

The Warning, the Override, and the Infrastructure We Have Not Built

When Extinction Odds Meet National Security Logic, the Question Is Not Whether Government Acts but How

1. The Warning That Changes State Logic

A single probability estimate from a credible pioneer can change the posture of an entire state. Geoffrey Hinton, the 2024 Nobel laureate in Physics and widely recognized intellectual architect of deep learning, places the chance of AI-driven human extinction at 10 to 20 percent within roughly three decades. The range appeared in interviews with the BBC in December 2024, held steady through CNBC and Observer reporting in mid-2025, and has been repeated across multiple major interviews and public remarks without revision. At the Ai4 conference in August 2025, Hinton said superintelligence could arrive in as little as five years and likely within twenty, while maintaining the 10 to 20 percent extinction probability. The probability held, but the clock moved closer.

This is not a fringe position. The AI Impacts 2022 expert survey found that nearly half of respondents put the probability of existential or extremely bad outcomes from advanced AI at 10 percent or higher. The AI Safety Clock, maintained by IMD Business School, registered 20 minutes to midnight as of September 2025. When a Nobel laureate repeats this estimate across multiple continents and venues, the statement functions less as a forecast and more as a permission slip. Once a figure of that stature says 10 to 20 percent chance of extinction, every national security institution in Washington has the raw material it needs to classify, monitor, and if necessary seize.

The move from probability to policy is not theoretical. National security posture does not wait for mathematical certainty. It responds to high consequence risk under uncertainty, especially when the risk is framed by credible insiders and amplified by great power competition. Nuclear deterrence doctrine operated on probabilities measured in fractions of a percent. Biodefense spending scales with scenarios that intelligence agencies rate as low likelihood but catastrophic consequence. The law does not need a speech doctrine bridge to justify attention. It already treats strategic technology as a domain of counterintelligence, supply chain protection, and priority collection. The public record reflects that shift in plain language, including an AI Action Plan that calls for prioritizing intelligence on foreign frontier AI projects and a national security memorandum that frames protection of the AI ecosystem against foreign intelligence threats as explicit policy. A 10 to 20 percent extinction probability, stated publicly by the technology’s own architect, provides institutional justification for government escalation that exceeds the threshold applied to every prior strategic technology.

The question is not whether government will act, because government is already acting. The question is what form that action takes.

2. The Security Reflex Already in Motion

The public record documents a government that already treats frontier AI as national security relevant infrastructure. Executive Order 14148, issued January 20, 2025, revoked the prior administration’s Executive Order 14110. Three days later, Executive Order 14179 established the replacement framework, reframing AI policy around removing barriers to American AI leadership while retaining national security provisions. The direction of travel became clearer with each subsequent release. America’s AI Action Plan, published by the White House in July 2025, explicitly prioritizes intelligence collection on foreign frontier AI projects with national security implications. The 2024 National Security Memorandum on AI went further, designating frontier models as dual use technology central to American power, directing agencies to integrate AI into national security missions, and elevating counterintelligence operations to protect AI infrastructure from foreign espionage. The intelligence community followed the same logic. The 2025 Annual Threat Assessment from the Director of National Intelligence treats AI explicitly as part of the threat environment, with significant material in the China section. And the institutional groundwork was already laid: the National Security Commission on Artificial Intelligence final report frames AI as central to great power competition, while NIST’s AI Risk Management Framework provides the government backed risk management structure meant to be adopted across the AI lifecycle.

These are not speculative scenarios; they are published federal policy.

As of late February 2026, the dispute between the Pentagon and Anthropic remains active and the legal and procurement consequences may evolve. But the governance signal is already clear enough to analyze: when the state frames frontier AI as national security infrastructure, the demand curve shifts toward access, control, and compliance.

On February 27, 2026, that demand became visible to everyone. The Pentagon gave Anthropic a deadline: allow the military to use its Claude model for all lawful purposes, without restriction, or face consequences. Anthropic held two positions: no mass domestic surveillance of Americans, and no fully autonomous weapons without human oversight. The Pentagon rejected both conditions. What followed was unlike anything in the modern history of government-industry relations. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk to national security, a classification historically reserved for foreign adversaries and never before publicly applied to an American company. President Trump directed all federal agencies to cease using Anthropic’s technology.

The behind-the-scenes details sharpen the picture further. Axios reported that minutes before Hegseth posted the supply chain designation, a Pentagon undersecretary was on the phone offering Anthropic a deal that would have required allowing the collection and analysis of geolocation data, web browsing data, and personal financial information purchased from data brokers. This is the deal Anthropic refused.

Also according to Axios reporting, Claude was the only AI system operating on classified military networks at the time of the ban and was reportedly used in the operation to capture Nicolas Maduro. Defense officials told reporters that disentangling would be operationally costly. The government banned it anyway, because the company refused to permit mass surveillance and autonomous weapons.

The speed of what came next tells its own story. Hours after the designation, OpenAI announced it had reached agreement with the Pentagon for the same classified systems. Sam Altman stated that the contract includes the same core protections against mass domestic surveillance and fully autonomous weapons that Anthropic had demanded. But the framing differed in ways that matter. Where Anthropic sought to enshrine these restrictions as company imposed contractual prohibitions, OpenAI framed them as adherence to existing U.S. law and policy, allowing the Pentagon to accept the language under “all lawful purposes” while OpenAI claimed the protections were embedded in the agreement. OpenAI deployed a cloud-only safety stack, placed forward-deployed engineers with security clearances inside the Pentagon for oversight, and explicitly asked the government to extend the same terms to all AI companies and to resolve the Anthropic situation. Altman publicly called for de-escalation, and OpenAI stated that it opposes labeling Anthropic a supply chain risk.

The difference was not the outcome, since both companies claimed to prohibit mass surveillance and autonomous killing. The difference was the concession of oversight. OpenAI gave the Pentagon the feeling of operational control by embedding its own personnel inside the institution. Anthropic maintained that the company, not the government, held the contractual enforcement mechanism. The Pentagon punished the company that held the harder line and rewarded the one that found a way to say yes. And the speed of the transition reveals the competitive dynamic at its starkest: when one company holds a principled position, another is positioned to step in, capture the business, and frame the outcome as responsible collaboration. This is Economic Override operating through competitive displacement.

The pattern is clear: when government treats AI as a national security asset, the logic of access, control, and compliance follows. The question is not whether this reflex exists but whether it leads to accountable infrastructure or to coercive control that compounds the problem.

3. The Economic Override Pattern: Why Safety Teams Keep Leaving

The Economic Override Pattern, as defined in Governing AI: When Capability Exceeds Control (Puglisi, 2025, Chapter 2), describes the structural observation that corporate incentives systematically prioritize capability advancement over safety validation across all risk domains. Profit maximization, competitive pressure, and shareholder returns create predictable governance failures when mandatory accountability structures are absent. The mechanism is straightforward: when organizations face a choice between deploying faster and investing in governance, the economic incentives consistently win. This is not occasional or accidental; it is structural and repeatable.

The pattern classifies as a Tier 2 working concept, meaning it is a framework supported by observable evidence but not yet independently validated as formal theory. The underlying data, however, is Tier 1: established fact proven by others. A 2025 EY global survey found that 76 percent of organizations are currently using or planning to use agentic AI within the next year, yet only 33 percent maintain responsible AI controls. That gap between deployment speed and governance maturity is the Economic Override Pattern showing up as a measurable deficit in the public record.

The OpenAI Pentagon deal sharpens the point further. By asking the government to offer these same terms to all labs, OpenAI effectively commoditized safety. It turned safety into a standardized federal contract clause rather than a core corporate value. When safety becomes a negotiation chip in a competitive procurement process, the company that finds the most flexible way to say yes captures the market position. The company that holds the firmer line loses the contract and gets designated a national security threat. Every future AI company now knows exactly what compliance looks like and exactly what principled resistance costs.

The safety team departures at frontier AI labs provide the most visible evidence trail of Economic Override operating inside the organizations themselves.

OpenAI

In May 2024, OpenAI dissolved its Superalignment team after barely one year of operation, despite a public commitment to dedicate 20 percent of the company’s compute to the effort, a commitment that was never fulfilled. Ilya Sutskever, co-founder and chief scientist, departed, and Jan Leike, co-lead of the team, resigned and publicly stated that safety culture and processes have taken a backseat to shiny products. The departures kept coming. In October 2024, Miles Brundage left the AGI Readiness team, writing that neither OpenAI nor any other frontier lab is ready. Daniel Kokotajlo resigned after losing confidence in the company’s approach to responsible AGI development, later testifying before Congress. Time reported that Kokotajlo believed he forfeited significant equity to speak publicly, a signal of the personal cost attached to dissent inside these organizations. By August 2024, nearly half of OpenAI’s AGI safety team had departed, and subsequent departures have likely increased that figure. Ryan Beiermeister, a safety executive, was reportedly fired after opposing the rollout of an adult content mode, according to reporting by The Information. A separate researcher resigned citing deep reservations about the company’s advertising strategy.

Then the pattern repeated at a higher level. In late 2025 or early 2026, OpenAI disbanded its Mission Alignment team after roughly 16 months of operation. The team’s leader, Joshua Achiam, was reassigned to a chief futurist role with no operational authority, and the team itself was scattered across divisions.

One fact stands on its own. In a November 2025 IRS filing, OpenAI removed the word “safely” from its mission statement entirely. The original mission committed to AI that “safely benefits humanity, unconstrained by a need to generate financial return.” The revised version dropped both “safely” and the financial constraint language. That edit, buried in a tax filing during a for-profit restructuring contingent on billions in new funding, is the Economic Override Pattern reduced to a single word deletion. Three months later, the company signed a classified Pentagon contract. Whether the mission statement revision created the legal space for that contract or simply preceded it, the sequence speaks for itself.

Anthropic

On February 9, 2026, Mrinank Sharma, head of Anthropic’s Safeguards Research team, resigned publicly. His letter stated that the world is in peril and that throughout his time at Anthropic, he had repeatedly seen how hard it is to truly let our values govern our actions. Sharma described facing constant pressures to set aside what matters most.

The Pattern

The sequence repeats across organizations. Companies create safety teams with impressive mandates, then dissolve those teams when findings conflict with the product roadmap. Researchers forfeit equity and career position to issue public warnings. And the structure does not self-correct, because the incentives that drive it are not bugs in the system; they are the system.

Hinton himself showed what it costs to break free. He left Google in 2023, explaining that he could not speak freely about AI risk while accepting their money and being influenced by what is in their own interest. He has not returned, and no for-profit company with shareholders has offered him conditions under which he could do the work he believes is necessary.

Economic Override has sibling variants, and they matter for any institution that might claim to be exempt from the pattern. Government entities face the Political Override Pattern, where administration changes, budget politics, and sovereign interest compromise governance quality. Nonprofits face the Donor Override Pattern, where funder influence and board composition produce the same failure through different pressure points. All three variants produce the same structural outcome: concentrated authority without sufficient epistemic coverage, compromised by incentives external to governance quality.

4. Why Surveillance and Seizure Compound the Problem

When a state identifies a catastrophic risk and responds by forcing access, seizing systems, or expanding monitoring without transparent guardrails, it buys short term control at the cost of legitimacy, civil liberties, and the trust that sustains innovation. The Anthropic dispute makes the mechanism precise. The government demanded unrestricted access to a model operating on classified networks, the company held two ethical positions, and the government responded with an economic weapon, a supply chain risk designation, that threatens not just the company’s federal contracts but every commercial relationship the company holds with any entity that does business with the Pentagon.

This response does not address the underlying risk. It reinforces the structure that created it. The same handful of companies remain in control of the models, the same profit incentives continue to drive development, and the same safety teams continue to leave. The only change is that the remaining companies now face an additional incentive: comply with government demands or face economic destruction. The competitive race accelerates, and safety constraints become liabilities rather than features.

If society feeds the risk narrative exclusively through security institutions and for-profit labs, the result is a closed, surveilled ecosystem, not a governed one. The nuclear parallel is instructive here. Nuclear safety combined strong state control with public law regimes, international treaties, and multilateral inspectors. The Atomic Energy Act of 1946 did not simply nationalize fissile material. It created the Joint Committee on Atomic Energy, a permanent congressional oversight body with classified access and independent technical staff. No analogous structure exists for AI. The current trajectory runs heavy on state control plus private contracts and light on public, plural, enforceable governance that is not captured by a handful of firms and security organs.

The risk is not one specific secret program but a predictable arc: catastrophic risk framing plus geopolitical competition plus private incentive misalignment produces state pressure for access, control, and surveillance. That arc is what governance must constrain.

5. Proof That Another Path Exists

The constructive claim is not that any single framework solves the problem but that alternatives to surveillance and corporate self-regulation already exist as working concepts, published specifications, and legislative proposals. Their existence proves that the choice between trusting companies and trusting espionage is false. A third option is available: build public infrastructure.

GOPEL: Governance Without Cognition

GOPEL, the Governance Orchestrator Policy Enforcement Layer, is a non-cognitive governance agent that performs exactly seven deterministic operations: dispatch, collect, route, log, pause, hash, and report. It does no cognitive work, exercises no judgment, and evaluates no content. The security logic follows directly from that design constraint: because the agent cannot think, no superintelligent model can corrupt it, and because it dispatches to multiple providers simultaneously, disagreement surfaces as signal rather than error. The reference implementation is published on GitHub under Creative Commons licensing with 14 source files, 9 test suites, and 183 tests with zero failures.

What makes GOPEL different from other governance proposals is what it refuses to do. The architecture eliminates cognition from the governance layer entirely, which removes the class of attack where adversarial inputs manipulate the judgment of a governance system. There are no weights, no training data, no emergent behaviors. Every operation is deterministic: given the same inputs, it produces identical outputs. The specification is public, the code is auditable, and the logs are platform-independent text files any AI system can read. It is closer to a traffic signal than a self-driving car.
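The claim that logs can carry cryptographic integrity without any cognition in the loop can be illustrated with a short sketch. This is not GOPEL's published implementation; the function names and entry format here are assumptions for illustration. The technique itself, a hash-chained append-only log in which editing or deleting any entry breaks every subsequent hash, is a standard way to make plain text files tamper-evident:

```python
import hashlib
import json

# Illustrative sketch only: GOPEL's published spec defines its own log format.
# A hash chain links each entry to the hash of the one before it, so any
# after-the-fact edit is mechanically detectable with zero judgment involved.

GENESIS = "0" * 64  # fixed starting hash for an empty chain

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(payload.encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edited or removed entry breaks the chain."""
    prev_hash = GENESIS
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"op": "dispatch", "task": "t-001"})
append_entry(log, {"op": "collect", "task": "t-001"})
assert verify_chain(log)

log[0]["event"]["task"] = "t-999"   # tamper with an earlier entry
assert not verify_chain(log)        # the chain detects the edit
```

Verification is pure recomputation: given the same log, it always produces the same verdict, which is what makes the audit trail readable and checkable by any party without trusting the operator.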

The infrastructure parallels ground the concept in familiar governance logic. The FAA does not tell Boeing how to design wings; it requires flight data recorders. The SEC does not tell banks how to invest; it requires audit trails. GOPEL operates on the same principle: it does not evaluate AI outputs, but it ensures that humans see disagreement when it occurs, that every decision is logged with cryptographic integrity, and that no single provider controls the governance layer.

Consider the contrast with the current alternative. The OpenAI Pentagon deal relies on a trust-based model: a gentleman’s agreement that the cloud-based safety stack will hold, enforced by forward-deployed engineers whose employment depends on the company whose behavior they are meant to constrain. GOPEL replaces this trust-based architecture with a deterministic enforcement layer. If a red line is violated, a non-cognitive layer logs the breach and pauses the dispatch automatically, regardless of whether an engineer is in the room or a CEO is on the phone.
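The difference between trust-based and deterministic enforcement can be made concrete. The sketch below is illustrative only, with hypothetical names and flag format; the point is that the pause rule is a fixed check on a structured flag raised upstream, never a judgment about model output:

```python
# Illustrative sketch of a GOPEL-style "pause" rule, not the published spec.
# The governance layer never inspects or evaluates content. It applies one
# fixed rule: any red-line flag halts dispatch and records the breach, and
# the halt persists until a human intervenes, regardless of who is on the phone.

from dataclasses import dataclass, field

@dataclass
class Dispatcher:
    paused: bool = False
    audit_log: list = field(default_factory=list)

    def handle(self, task_id: str, flags: set[str]) -> str:
        """Deterministic routing: identical inputs always yield identical outcomes."""
        if self.paused:
            self.audit_log.append(("blocked", task_id))
            return "blocked"
        if "red_line" in flags:          # raised upstream, not judged here
            self.paused = True
            self.audit_log.append(("breach_pause", task_id))
            return "paused"
        self.audit_log.append(("dispatched", task_id))
        return "dispatched"

d = Dispatcher()
assert d.handle("t-001", set()) == "dispatched"
assert d.handle("t-002", {"red_line"}) == "paused"   # breach halts the pipeline
assert d.handle("t-003", set()) == "blocked"         # stays halted until humans act
```

Because there are no weights and no learned behavior in this layer, there is nothing for an adversarial input to persuade; the only way past the pause is a human decision outside the system.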

This is a proposed implementation, classified as Tier 2 infrastructure: a working concept supported by observable evidence but not yet validated at federal scale. Federal pilots are required. The point is not that GOPEL is the final answer but that non-cognitive governance infrastructure is buildable, testable, and publishable today.

AI Provider Plurality: Resilience as Policy

The AI Provider Plurality Congressional Package, published in February 2026 on GitHub and SSRN, translates the infrastructure argument into federal legislation. The package contains four documents: a summary brief, a constitutional and philosophical policy brief, a legislative framework with phased appropriations, and a technical appendix with the GOPEL specification and HAIA-RECCLIN operational model.

The package asks Congress to do three things. First, fund GOPEL as national AI infrastructure, beginning with a Phase 0 that requires no new appropriation and uses existing agency licenses and staff capacity. Second, mandate API accessibility for AI companies participating in federal procurement and operating in defined high consequence decision pipelines. Third, invest in small AI platforms through SBIR and STTR mechanisms to create the competitive supply that makes governance real.

The principle underneath is resilience engineering, not ideology. When multiple certified providers serve critical functions through a common governance interface, concentration risk drops, capture risk drops, and competitive pressure shifts toward safety features that are auditable rather than marketing claims. In documented working concept operations across several hundred tasks, platforms produced materially different outputs on identical prompts in 15 to 25 percent of cases. Those disagreements, surfaced by governance infrastructure, triggered human verification and prevented error propagation.
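The disagreement-surfacing mechanism is simple enough to sketch. Provider names, responses, and the normalization rule below are hypothetical; a real deployment would have to define what counts as materially different for each task type. The structure, though, is just a set comparison followed by a human checkpoint:

```python
# Illustrative sketch of cross-provider disagreement detection. The providers,
# responses, and crude normalization here are invented for the example; the
# mechanism is what matters: disagreement is surfaced as a signal, not an error.

def normalize(answer: str) -> str:
    """Crude canonical form so trivial formatting differences don't count."""
    return " ".join(answer.lower().split())

def needs_human_review(responses: dict[str, str]) -> bool:
    """Flag a task when the certified providers do not all agree."""
    forms = {normalize(r) for r in responses.values()}
    return len(forms) > 1

tasks = [
    {"A": "Approve claim", "B": "approve claim", "C": "Approve  claim"},
    {"A": "Approve claim", "B": "Deny claim",    "C": "Approve claim"},
]
flagged = [needs_human_review(t) for t in tasks]
rate = sum(flagged) / len(tasks)   # disagreement rate across the batch
assert flagged == [False, True]
assert rate == 0.5
```

Tracked over hundreds of tasks, this per-batch rate is exactly the kind of baseline cross-provider disagreement data a Phase 0 pilot would generate.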

The infrastructure precedent is settled. The Interstate Highway System is infrastructure. So is the traffic management software that controls every signal. The electrical grid is infrastructure. So is SCADA. Air traffic control is infrastructure. The collision avoidance logic that prevents two aircraft from occupying the same space is software. Congress has already agreed, repeatedly, that digital systems become infrastructure when they are essential to the functioning of society. The Infrastructure Investment and Jobs Act of 2021 allocated 65 billion dollars for broadband as critical national infrastructure. The question is not whether AI governance qualifies as infrastructure but whether the federal government builds it or leaves the infrastructure of cognition entirely in private hands.

6. The Choice

Hinton’s probability is real, the safety departures are real, and the Economic Override Pattern is documented across every major frontier lab. The Anthropic supply chain designation and the Pentagon’s demand for unrestricted surveillance capability are both matters of public record. So is the same-day pivot from Anthropic’s ban to OpenAI’s contract. These are documented events from the past 18 months, and they point in one direction.

Two paths diverge from this evidence.

The first path is the one governments follow by default: classify, surveil, seize, and compel. Force access to models through economic pressure. Punish companies that hold ethical positions. Reward companies that find flexible ways to comply. Consolidate control in the hands of whoever says yes fastest. This path treats AI as contraband to be controlled rather than infrastructure to be governed. It reinforces the same concentrated, profit driven, safety-adverse structure that created the risk in the first place, and it leaves the public exactly where it started: dependent on institutions that have already shown they will choose power over accountability.

The second path keeps the market open and puts governance in the hands of the people who live with the consequences. Public, non-cognitive governance infrastructure that no single company owns and no single agency controls. Multiple providers competing on safety that is auditable, not safety that is promised. Human checkpoints at every decision gate where the stakes are real. This is not government control of AI. This is government infrastructure that enables an open market to compete under enforceable rules, the same principle that built air traffic control, securities regulation, and the highway system.

The tools to build this path exist today. GOPEL and AI Provider Plurality are not the only answers, but they are existence proofs that the binary choice between corporate self-regulation and state surveillance is false. The question is whether enough people demand the third option before the surveillance reflex hardens into permanent architecture.

That demand starts with recognition. Every time a safety team is dissolved, every time a researcher forfeits equity to issue a public warning, every time a company removes the word “safely” from its mission statement, that is the Economic Override Pattern in operation. Citizens who can name the pattern can demand structural accountability rather than accepting corporate assurances. Institutions that require multi-provider architectures for any AI system touching employment, credit, healthcare, law enforcement, education, or national security decisions stop concentrating risk in a single model from a single company with a single set of incentives. Plurality is a safety property, and requiring it costs nothing in capability while gaining everything in resilience. Policymakers can begin with Phase 0 of the AI Provider Plurality framework, which requires no new appropriation, and generate the baseline data on cross-provider disagreement rates that builds the empirical case for infrastructure investment.

The founders answered the question of concentrated authority for every previous form of power, and the answer was always the same: distribute power, require transparency, keep the people in command. The title of this article says “if We Do Not Act.” Not if government fails to act. Not if industry fails to self-correct. If we, the people who use these systems, fund these companies, elect these officials, and live with these consequences, do not act. The infrastructure is not finished, but it is started. The question is whether we build it before the window closes.


Frequently Asked Questions

Why would the U.S. government seize AI platforms and data centers?

When government classifies frontier AI as national security infrastructure, the default response is to force access through economic pressure, surveillance mandates, and supply chain designations. The February 2026 Anthropic dispute shows this pattern already in motion. Without public governance infrastructure as an alternative, seizure and coercive control become the path of least resistance for any administration facing catastrophic risk estimates from credible insiders.

What happened between the Pentagon and Anthropic in February 2026?

The Pentagon demanded that Anthropic allow unrestricted military use of its Claude AI model. Anthropic refused to permit mass domestic surveillance of Americans or fully autonomous weapons without human oversight. Defense Secretary Hegseth designated Anthropic a supply chain risk to national security, a classification never before applied to an American company. President Trump directed all federal agencies to stop using Anthropic’s technology. Hours later, OpenAI signed a contract for the same classified systems.

What is the Economic Override Pattern?

The Economic Override Pattern describes how corporate incentives systematically prioritize capability advancement over safety validation. Profit maximization, competitive pressure, and shareholder returns create predictable governance failures when mandatory accountability structures are absent. The pattern is documented across frontier AI labs through repeated safety team dissolutions, researcher departures, and mission statement revisions that remove safety language.

What is GOPEL?

GOPEL, the Governance Orchestrator Policy Enforcement Layer, is a non-cognitive governance agent that performs seven deterministic operations: dispatch, collect, route, log, pause, hash, and report. Because the agent performs zero cognitive work, no AI system can manipulate it. The reference implementation is published on GitHub under Creative Commons licensing with full test coverage.

What is AI Provider Plurality?

AI Provider Plurality is a federal policy proposal that requires critical AI decisions to be routed through multiple independent providers rather than a single company. The Congressional Package includes a legislative framework, technical specification, and a Phase 0 implementation that requires no new appropriation. The principle is resilience engineering: when multiple providers compete under a common governance interface, concentration risk drops and safety becomes auditable rather than claimed.

How is this different from government control of AI?

The article argues for government infrastructure, not government control. The distinction mirrors existing models: the FAA does not design aircraft but requires flight data recorders. The SEC does not direct investments but requires audit trails. Public governance infrastructure keeps the market open and providers competing under enforceable rules, rather than concentrating control in either a handful of companies or a handful of agencies.

What can citizens do about AI governance?

Citizens can start by recognizing the Economic Override Pattern when it appears: safety teams dissolved, researchers forced to forfeit equity to speak publicly, mission statements quietly revised to remove safety commitments. Naming the pattern accurately is the first step toward demanding structural accountability. Institutions can require multi-provider architectures for any AI system touching consequential decisions. Policymakers can begin with Phase 0 of the AI Provider Plurality framework at zero additional cost.

Who is Geoffrey Hinton and why does his probability estimate matter?

Geoffrey Hinton is the 2024 Nobel laureate in Physics and a founding figure in deep learning research. He places the probability of AI-driven human extinction at 10 to 20 percent within roughly 30 years. This estimate, repeated consistently across major interviews and conferences, provides institutional justification for national security escalation that exceeds the threshold applied to every prior strategic technology.


Sources

Hinton, G. (2024, December 27). Interview. BBC Radio 4. Cited in The Guardian: “Godfather of AI shortens odds of technology wiping out humanity over next 30 years.” https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years

Hinton, G. (2025, June 17). Interview. CNBC: “AI Godfather Geoffrey Hinton: There’s a chance that AI could displace humans.” https://www.cnbc.com/2025/06/17/ai-godfather-geoffrey-hinton-theres-a-chance-that-ai-could-displace-humans.html

Hinton, G. (2025, August). Ai4 Conference. Reported across Observer, ZME Science, Common Dreams. Superintelligence timeline: five to twenty years.

Hinton, G. (2023, May 2). Departure from Google. The Guardian: “Godfather of AI Geoffrey Hinton quits Google and warns over dangers of machine learning.” https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning

NobelPrize.org. (2024). Press release: The Nobel Prize in Physics 2024. https://www.nobelprize.org/prizes/physics/2024/press-release/

AI Impacts. (2022). Expert survey on progress in AI. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

IMD Business School. (2025). AI Safety Clock. https://www.imd.org/centers/digital-ai-transformation-center/aisafetyclock/

Executive Order 14148. (2025, January 20). Initial Rescissions of Harmful Executive Orders and Actions. The White House. Revokes Executive Order 14110.

Executive Order 14179. (2025, January 23). Removing Barriers to American Leadership in Artificial Intelligence. The White House. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/

The White House. (2025, July). America’s AI Action Plan. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf

National Security Memorandum on Advancing U.S. Leadership in Artificial Intelligence. (2024). The American Presidency Project, UC Santa Barbara. https://www.presidency.ucsb.edu/documents/national-security-memorandum-advancing-the-united-states-leadership-artificial

Director of National Intelligence. (2025). Annual Threat Assessment of the U.S. Intelligence Community. https://www.dni.gov/index.php/newsroom/reports-publications/reports-publications-2025/4058-2025-annual-threat-assessment

National Security Commission on Artificial Intelligence. (2021). Final Report. https://reports.nscai.gov/final-report/table-of-contents/

NIST. (2023). AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework

Leike, J. (2024, May). Resignation statement. Cited in Fortune: “Top OpenAI researcher resigns, saying company prioritized shiny products over safety.” https://fortune.com/2024/05/17/openai-researcher-resigns-safety/

Fortune. (2024, May 21). “OpenAI promised 20% of its computing power to combat AI risks. It never fulfilled that pledge.” https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/

Brundage, M. (2024, October). Departure statement: “Neither OpenAI nor any other frontier lab is ready.”

Kokotajlo, D. (2024). Congressional testimony. Center for AI Policy reporting. Equity forfeiture reported in Time. https://time.com/6985866/openai-whistleblowers-interview-google-deepmind/

Platformer. (2026). “Exclusive: OpenAI disbanded its mission alignment team.” https://www.platformer.news/openai-mission-alignment-team-joshua-achiam/

Fortune. (2026, February 23). “OpenAI changed its mission statement 6 times in 9 years, removing AI that ‘safely benefits humanity.'” https://fortune.com/2026/02/23/openai-mission-statement-changed-restructuring-forprofit-business/

Sharma, M. (2026, February 9). Resignation letter from Anthropic. Forbes, Yahoo Finance, CNN. https://www.forbes.com/sites/conormurray/2026/02/09/anthropic-ai-safety-researcher-warns-of-world-in-peril-in-resignation/

Morningstar/MarketWatch. (2026, February 12). “Senior AI staffers keep quitting and are issuing warnings about what’s going on at their companies.” https://www.morningstar.com/news/marketwatch/20260212242/senior-ai-staffers-keep-quitting-and-are-issuing-warnings-about-whats-going-on-at-their-companies

Pentagon-Anthropic dispute. (2026, February 24-28). Supply chain risk designation, Trump directive, OpenAI contract.
Axios: https://www.axios.com/2026/02/27/anthropic-pentagon-supply-chain-risk-claude
Axios (deadline): https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario
Reuters: https://www.reuters.com/world/us/trump-says-he-is-directing-federal-agencies-cease-use-anthropic-technology-2026-02-27/
AP News: https://apnews.com/article/b72d1894bc842d9acf026df3867bee8a
CBS News: https://www.cbsnews.com/news/hegseth-declares-anthropic-supply-chain-risk/
TechCrunch: https://techcrunch.com/2026/02/27/pentagon-moves-to-designate-anthropic-as-a-supply-chain-risk/
Fortune: https://fortune.com/2026/02/28/openai-pentagon-deal-anthropic-designated-supply-chain-risk-unprecedented-action-damage-its-growth/

OpenAI Pentagon contract. (2026, February 28). Reuters: “OpenAI details layered protections in US defense department pact.” https://www.reuters.com/business/media-telecom/openai-details-layered-protections-us-defense-department-pact-2026-02-28/

EY. (2025, June 4). “EY survey: AI adoption outpaces governance as risk awareness among the C-suite remains low.” Global cross-industry release. https://www.ey.com/en_gl/newsroom/2025/06/ey-survey-ai-adoption-outpaces-governance-as-risk-awareness-among-the-c-suite-remains-low

Atomic Energy Act of 1946. Pub. L. 79-585.

Puglisi, B. C. (2025). Governing AI: When Capability Exceeds Control. ISBN 9798349677687. Chapter 2: The Economic Override Pattern.

Puglisi, B. C. (2026, February). AI Provider Plurality: A Congressional Package. GitHub: https://github.com/basilpuglisi/Public-Policy | SSRN Abstract ID 6195238: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6195238

Puglisi, B. C. (2026, February). Ethics for Oversight and Protection: The Constitutional Case for AI Governance Infrastructure. SSRN Abstract ID 6195278: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6195278

Puglisi, B. C. (2026, February). GOPEL Reference Implementation. GitHub: https://github.com/basilpuglisi/HAIA | Creative Commons Attribution-NonCommercial 4.0.

Brookings Institution. (2025, November). AI infrastructure as 21st century railroad parallel.

Infrastructure Investment and Jobs Act. (2021). $65 billion broadband allocation.


