Infrastructure for Safe AI in America
The Problem
A small number of corporations now control the infrastructure of intelligence in America. AI shapes decisions in finance, healthcare, education, and national security. Every one of these systems hallucinates, confabulates, and exhibits bias. Different AI platforms produce different outputs on the same inputs, which means no single system can safely serve as an oracle for public decisions. Geoffrey Hinton, who helped build the technology, warns that capability is advancing faster than human ability to control it.
Executive Order 14179 (January 23, 2025) calls for removing barriers to American AI leadership. Executive Order 14365 (December 11, 2025) establishes a national policy framework for artificial intelligence, rejecting a patchwork of fifty state regulatory regimes in favor of a unified national standard. Today there is no federal infrastructure that makes such a standard operational across agencies and providers.
The American public does not need more AI regulation. The American public needs AI infrastructure.
This infrastructure does not regulate speech. It regulates auditability for consequential decision pipelines.
Why Infrastructure, Not Regulation

When aviation became critical to commerce and safety, the federal government did not tell airlines how to fly. The Federal Aviation Act of 1958 created the FAA as an independent infrastructure authority. Congress did not regulate flight paths. It built the system that makes safe flight possible: air traffic control, flight data recorders, safety certification, accident investigation. The government built the road. The airlines were the vehicles.
When finance became critical to the economy, the government did not tell banks how to invest. The Securities Exchange Act of 1934 established mandatory disclosure infrastructure. Congress required audit trails, not investment strategies: disclosure requirements, market surveillance, investor protection. The government built the road. The banks were the vehicles.
When telecommunications became critical to daily life, the government did not tell phone companies what to say. The Communications Act of 1934 mandated interoperability. Congress ensured networks could connect, not what they would carry: spectrum allocation, universal service, consumer protection. The government built the road. The carriers were the vehicles.
AI is now critical to everything. The pattern is the same. The government does not need to tell AI companies how to think. It needs to build the infrastructure that makes AI auditable, accountable, and interoperable. The government builds the road. AI platforms are the vehicles.
This is not a proposal for more regulation. This is the engineering that makes less regulation safe.
What Is GOPEL
GOPEL (Governance Orchestrator Policy Enforcement Layer) is the proposed infrastructure. It is a non-cognitive governance agent that performs seven deterministic operations: dispatch, collect, route, log, pause, hash, and report. Zero cognitive work. Zero judgment. Zero content evaluation.
GOPEL does not evaluate AI outputs. It does not rank them. It does not filter them. It moves data between platforms and logs everything. It is a pipe with a logbook and a tamper-evident seal.
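To make the non-cognitive claim concrete, here is a minimal sketch of what two of the seven operations (dispatch and log, with hashing) could look like. This is an illustration under assumed names, not the GOPEL specification: the `Gopel` class, the `platform.query` interface, and the log format are all hypothetical.

```python
# Illustrative sketch only; class, method, and field names are hypothetical,
# not the published GOPEL specification.
import hashlib
import json
import time

class Gopel:
    """Non-cognitive layer: moves data and logs it, never evaluates it."""

    def __init__(self, log_path="gopel.log"):
        self.log_path = log_path
        self.prev_hash = "0" * 64  # genesis value for the tamper-evident chain

    def dispatch(self, prompt, platforms):
        """Send the same prompt to every platform. No ranking, no filtering."""
        responses = {p.name: p.query(prompt) for p in platforms}
        self.log("dispatch", {"prompt": prompt, "platforms": sorted(responses)})
        return responses

    def log(self, operation, payload):
        """Append a hash-chained entry; altering any past line breaks the chain."""
        entry = {"ts": time.time(), "op": operation,
                 "payload": payload, "prev": self.prev_hash}
        line = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.log_path, "a") as f:
            f.write(line + "\n")
        return self.prev_hash
```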
The architecture cannot be co-opted because there is nothing to co-opt. If AI capability advances to the point of influencing its operators, a non-cognitive layer has no cognition to manipulate. It removes a class of cognitive manipulation risk and leaves the remaining risks in transport, identity, and access control, where deterministic security controls and auditing can verify integrity. The security principle is straightforward: replace manual labor, never human authority.
GOPEL is inspectable by design. Every operation is deterministic. Given the same inputs, it produces identical outputs. The specification is public. The code can be audited. The logs are platform-independent text files any AI system can read. Unlike the AI platforms it governs, GOPEL has no weights, no training data, no emergent behaviors. It is closer to a traffic signal than a self-driving car.
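Tamper evidence in such a log is checkable by anyone who holds the file. A sketch of verification, assuming the one-JSON-object-per-line format used in the sketch above:

```python
# Sketch: verify a hash-chained plain-text log. Deterministic by construction:
# the same file always yields the same verdict. Assumes the format written
# by the Gopel sketch above (one JSON object per line).
import hashlib
import json

def verify_log(path="gopel.log"):
    prev = "0" * 64
    for i, line in enumerate(open(path), start=1):
        entry = json.loads(line)
        if entry["prev"] != prev:  # chain broken: a line was altered or removed
            return f"tamper evidence at line {i}"
        prev = hashlib.sha256(line.rstrip("\n").encode()).hexdigest()
    return "log intact"
```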
Implementation would be housed under a small interagency program office with NIST technical profiles and GSA procurement integration.

“Infrastructure Is Physical. Code Isn’t Infrastructure.”
This is the most common objection, and it misunderstands what infrastructure has already become.
The interstate highway system is infrastructure. So is the traffic management software that controls every signal, ramp meter, and variable speed limit sign on it. The physical road is useless without the control systems.
The electrical grid is infrastructure. So is SCADA, the software that governs power distribution across every utility in America. Without SCADA, the physical grid is a collection of disconnected wires.
Air traffic control is infrastructure. The radar towers are physical. The system that prevents two aircraft from occupying the same space is software. No one argues that air traffic control is not infrastructure because the collision avoidance logic runs on code.
The financial clearing system is infrastructure. When you swipe a credit card, the physical card reader is the least important component. Fedwire, ACH, and SWIFT are software systems that move trillions of dollars daily. Congress regulates them as critical infrastructure.
Congress has already agreed. The Infrastructure Investment and Jobs Act of 2021 allocated $65 billion for broadband as critical national infrastructure. Federal policy already treats cybersecurity as critical infrastructure risk across multiple statutory authorities. Digital systems become infrastructure when they are essential to the functioning of society.
AI systems now consume physical infrastructure at industrial scale. Training large language models can consume significant volumes of water for cooling, substantial amounts of electricity, and compute resources that depend on semiconductor fabrication plants spanning three continents. Water and energy consumption vary by model size, data center location, and cooling method, but the scale is industrial by any measure. When the production of cognition requires water, power, and rare earth minerals at this scale, the governance of that cognition is infrastructure governance.
The question is not whether AI governance is infrastructure. The question is whether the federal government builds it, or whether it leaves the infrastructure of cognition entirely in private hands with no structural accountability.
The founders answered that question for every previous form of concentrated authority. The answer was always the same: distribute power, require transparency, keep humans in command.
The Package
The full package is available on GitHub and SSRN. A PDF version is available here.
This Congressional package contains four documents. Recommended read order: Document 1 first, then Document 3 for the fiscal and enforcement mechanism, then Document 2 for the constitutional justification, then Document 4 for technical staff.
Document 1: One-Pager — The elevator pitch. Three legislative actions, the infrastructure precedent, the bipartisan case, and the cost structure. Two pages. Start here.
Document 2: Policy Brief — The constitutional and philosophical case for AI governance infrastructure. Why checks and balances must extend to algorithms. Why concentration of cognitive infrastructure is a democratic threat. Why GOPEL is the answer.
Document 3: Legislative Framework — The policy mechanism. Federal coordination structure, phased appropriations, funding sources, procurement standards, provider plurality requirements, and a five-phase implementation timeline from Phase 0 (no new appropriation) through full-scale deployment.
Document 4: Technical Appendix — The GOPEL infrastructure specification and HAIA-RECCLIN operational model. Seven deterministic operations, three operating models, checkpoint-based governance gate report, automation bias detection thresholds, security architecture, and regulatory compliance mapping across NIST AI RMF, EU AI Act, and ISO 42001.
Three Things This Package Asks Congress To Do
- Fund GOPEL as national AI infrastructure. The government builds the road. AI platforms are the vehicles. Phase 0 requires no new appropriation and uses existing agency licenses and staff capacity.
- Mandate API accessibility for AI companies participating in federal procurement and operating in defined high-consequence decision pipelines. Vehicles must meet safety standards to use public roads. AI platforms must be queryable by governance infrastructure. A voluntary compliance pathway is available for the broader market.
- Invest in small AI platforms through SBIR/STTR mechanisms. Governance requires provider diversity. Investment in small AI companies creates the competitive supply that makes governance real.
Evidence Discipline
This package maintains a three-tier evidence standard:
Tier 1: Proven by others. AI hallucination, confabulation, bias, alignment failure, and single-point-of-failure risk are documented across peer-reviewed literature, NIST frameworks, and regulatory findings.
Tier 2: Built and operated as working concepts. HAIA-RECCLIN multi-AI governance under human arbitration has been operated across several hundred articles and a published book (2022 through 2025). Cross-platform disagreement has been observed in approximately 15 to 25 percent of tasks in working-concept implementations. Disagreement means materially different factual claims, recommended actions, or citations on the same prompt, not stylistic variance. Separately, cross-platform consistency measured by intraclass correlation coefficient (ICC) reached 0.96 across five platforms and four dimensions, indicating high convergence quality when platforms do agree. These are different metrics: disagreement rate indicates how often platforms diverge; consistency score indicates convergence quality when agreement occurs. A toy illustration of the distinction appears after this list. Both figures are single-practitioner operational observations, not validated federal benchmarks. Federal pilots are required to establish authoritative ranges.
Tier 3: Proposed for federal development. GOPEL as autonomous infrastructure, provider plurality engines at national scale, and standardized automation bias detection thresholds require federal investment, pilot programs, and validation before deployment claims can be made.
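To make the Tier 2 distinction concrete, here is the promised toy illustration of how a disagreement rate differs from a consistency score. The data is fabricated for demonstration and the function name is hypothetical; this is not the package's operational pipeline.

```python
# Fabricated demonstration data: per task, the material claims each platform returned.
import itertools

tasks = [
    {"A": {"rate=4%"},  "B": {"rate=4%"},  "C": {"rate=4%"}},   # agreement
    {"A": {"rate=4%"},  "B": {"rate=7%"},  "C": {"rate=4%"}},   # material disagreement
    {"A": {"cite:X"},   "B": {"cite:X"},   "C": {"cite:X"}},    # agreement
    {"A": {"act:hold"}, "B": {"act:hold"}, "C": {"act:sell"}},  # material disagreement
]

def disagrees(task):
    """Materially different claims on the same prompt, not stylistic variance."""
    return any(a != b for a, b in itertools.combinations(task.values(), 2))

rate = sum(disagrees(t) for t in tasks) / len(tasks)
print(f"disagreement rate: {rate:.0%}")  # how often platforms diverge (50% here)

# The ICC, by contrast, is computed over numeric quality ratings of outputs
# (e.g. with a standard intraclass correlation routine such as
# pingouin.intraclass_corr) and measures convergence quality when platforms agree.
```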
This Is a Pioneer Path
None of the observations in this package originated here. AI hallucination is documented. Provider plurality is already practiced in cybersecurity and financial auditing. Infrastructure governance is the foundation of every regulated industry. Automation bias is studied in aviation, healthcare, and national security.
What originated here is the combination: the argument that these established principles, assembled into a coherent infrastructure, can govern AI at national scale while preserving American free markets and meeting the highest international compliance standards.
This is a pioneer path, not a finished product. The country needs to start.
Frequently Asked Questions
“Doesn’t this slow down American AI innovation?”
GOPEL adds zero cognitive overhead to AI platforms. It is a pipe, not a filter. The FAA does not tell Boeing how to design wings. It requires flight data recorders. American aviation leads the world because of this infrastructure, not despite it.
“China isn’t doing this. Why should we tie our hands?”
China is building state-controlled AI. We are building democratically accountable AI. These are different systems with different strengths. If we do nothing, we get corporate concentration that is structurally similar to state control: one point of failure, one point of capture. Infrastructure preserves American competitive advantage through diversification, not restriction.
“This helps Big Tech by raising barriers to entry for startups.”
The opposite is true. Without public infrastructure, incumbents become the infrastructure. They control the APIs, the audit trails, the interoperability standards. GOPEL is public infrastructure. Any platform, any size, can connect. The SBIR/STTR investment component explicitly funds small platforms. The mandate is API accessibility, not certification complexity. This lowers barriers by preventing walled gardens.
“Agencies can handle AI governance themselves. Why build new infrastructure?”
Agencies are handling it themselves, separately. Each agency procures AI with separate contracts, separate audit standards, separate vendor lock-in. The result is dozens of incompatible governance systems, not one national standard. Executive Order 14365 explicitly rejects this patchwork. GOPEL is the infrastructure that makes a unified standard operational.
“The federal government can’t even build a website. Why trust it with AI infrastructure?”
GOPEL is not a consumer-facing website. It is backend infrastructure: logging, hashing, API routing. The government already operates comparable systems. Fedwire moves trillions of dollars daily. SCADA controls the electrical grid. Classified networks run secure multi-party computation. The question is not whether government can build software. It is whether critical infrastructure remains entirely in private hands with no public accountability layer.
“Non-cognitive governance is impossible. Someone has to evaluate outputs.”
GOPEL does not evaluate outputs. Humans evaluate outputs. GOPEL ensures humans see disagreement when it occurs. The non-cognitive design means GOPEL has no model, no weights, no training data. Nothing to manipulate. The evaluation happens at the human checkpoint, not in the infrastructure layer. This is the security architecture: separate the mechanism that moves data from the mechanism that judges it.
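A sketch of that separation, under hypothetical names: the infrastructure detects that outputs differ and pauses; the human decides which output stands.

```python
# Sketch of the checkpoint described above. Names are illustrative, not the
# specification: the gate detects THAT outputs differ and pauses; a human
# decides WHICH output is accepted.
def governance_gate(responses):
    """responses: platform name -> output for the same prompt."""
    if len(set(responses.values())) == 1:
        return next(iter(responses.values()))  # unanimous: pass through
    # Disagreement: pause the pipeline and surface every output, unranked.
    print("Disagreement detected; human review required:")
    for platform, output in responses.items():
        print(f"  {platform}: {output}")
    choice = input("Platform whose output to accept: ")
    return responses[choice]  # human judgment; the selection is logged upstream
```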
“API mandates will expose trade secrets.”
The mandate requires queryability, not model weights. It is the difference between requiring a car to have a diagnostic port and requiring the engine blueprints. Platforms respond to structured prompts and return structured outputs, exactly what they already do for millions of API customers. The trade secret objection was raised against the SEC’s disclosure requirements and the FAA’s black box mandates. In both cases, infrastructure accountability and commercial innovation proved compatible.
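As an illustration of the scale of the ask: queryability means structured input and structured output over an ordinary HTTP API, the surface platforms already sell. The endpoint and field names below are hypothetical assumptions, not a published standard.

```python
# Hypothetical example of the mandated surface: structured prompt in,
# structured output plus provenance metadata back.
import json
import urllib.request

body = json.dumps({
    "prompt": "List the eligibility criteria applied in this determination.",
    "trace_id": "gopel-000123",  # ties the response back to the audit log
}).encode()

req = urllib.request.Request(
    "https://platform.example/v1/query",  # hypothetical provider endpoint
    data=body,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.load(resp)  # e.g. {"output": ..., "model_version": ..., "trace_id": ...}
# No weights, no blueprints: the diagnostic port, not the engine.
```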
“Multi-AI workflows multiply cost and latency.”
Cost and latency are measurable. So is error detection. In documented working-concept operations, platforms have produced materially different outputs on identical prompts in 15 to 25 percent of tasks. Governance processes flagged these disagreements, triggered human verification, and prevented error propagation. Federal pilots will establish cost-benefit ratios. Phase 0 requires no new appropriation. Agencies can generate baseline data before any infrastructure investment.
“If AI becomes superintelligent, infrastructure won’t stop it.”
Correct. Infrastructure does not stop superintelligence. It stops corporate concentration, automation bias, and single-point-of-failure risk: documented, addressable problems today. If capability advances to the point Hinton describes, a non-cognitive governance layer has properties that cognitive layers lack: no cognition to manipulate, no judgment to influence. We build for the risks we can address, not the risks we can only speculate about.
“This will never pass. Tech lobbyists will kill it.”
Tech lobbyists killed the FAA in 1958? They killed the SEC in 1934? Infrastructure eventually wins because crises eventually occur. The question is whether Congress acts before the crisis or after. Phase 0 requires no legislation. Agencies can begin immediately under existing authority. When the first major AI failure in a federal system occurs, the infrastructure proposal will be ready.
Resources
- Full package on GitHub: github.com/basilpuglisi/Public-Policy
- Technical framework (HAIA-RECCLIN/GOPEL): github.com/basilpuglisi/HAIA
- SSRN working papers: Document 4 on SSRN | Document 2 on SSRN
- Book: Governing AI: When Capability Exceeds Control (2025, ISBN 9798349677687)
- Origin: The Case for AI Provider Plurality in Evidence-Based Research (October 30, 2025)
Contact: me@basilpuglisi.com to discuss implementation, draft legislative language, or schedule a briefing.