AI work leaves plenty of traces. The problem is that those traces are scattered across platforms, organized around conversation flow, and not structured around the questions an audit actually asks. CARCS closes that gap with a ten-section governed record built from a three-part prompt suite. It works on any AI platform, and named human sign-off is required before finalization. This working paper releases the protocol for feedback and collaboration from governance practitioners, compliance officers, and researchers.
White Papers
AI Governance Beyond the Warning: From Tristan Harris’s Diagnosis to the Infrastructure It Requires
A Governance Practitioner’s Response to the Diary of a CEO Interview (PDF Here) Executive Summary Tristan Harris’s November 2025 conversation on The Diary of a CEO reached millions of viewers with a structural diagnosis of the AI race: the same incentive architecture that produced social media’s damage to democracy and mental health is now operating […]
Enterprise AI ROI: What Seven Landmark Reports Found, What They Missed, and Five Decisions Worth Making Now
Type: Research Synthesis | Executive White Paper Period Covered: 2025–2026 Primary Sources: Accenture (2025) | Deloitte AI ROI Survey (Oct. 2025) | Deloitte State of AI in the Enterprise (Jan. 2026) | Google Cloud ROI of AI (2025) | McKinsey State of AI (Nov. 2025) | Microsoft Becoming a Frontier Firm (2025) | OpenAI State […]
Human Drift and Hallucination: The Data Literacy Crisis Hiding Behind the AI One
The technology industry has spent three years warning the world about AI hallucination, the phenomenon where artificial intelligence fabricates facts, invents citations, and generates confident nonsense. That warning is valid, and AI hallucination is real, documented, and dangerous when undetected. But it is not the most dangerous data problem in public discourse right now. The […]
HAIA-RECCLIN: Reasoning and Dispatch
Third Edition for Human AI Governance Get the PDF Here Executive Summary HAIA-RECCLIN is an operational methodology for governing AI output through structured human oversight. It comprises two capabilities: Reasoning, a ten-field output format that forces any AI platform to show its work, cite its sources, score its own confidence, flag its own conflicts, and […]
HAIA: Human Artificial Intelligence Assistant
The Name Given to the Ecosystem for Human-AI Collaboration (PDF) What It Is, Why It Exists, Where It Comes From Executive Summary HAIA stands for Human Artificial Intelligence Assistant. It is the ecosystem that structures a human’s interaction with AI, specifically with large language models, across every stage of collaboration: how the AI is instructed, […]
Checkpoint-Based Governance (CBG): A Constitutional Framework for Human-AI Collaboration
The Four Constitutional Properties Property 1: Primary Purpose CBG is AI Governance. It provides human oversight and accountability for AI-assisted work. CBG’s primary purpose is to supply the governance layer that sits on top of single-platform AI output and that makes RECCLIN dispatch and CAIPR parallel review into governed learning systems rather than AI frameworks alone. […]
GOPEL v1.5: The Non-Cognitive Governance Layer That Automates Without Thinking
What GOPEL Is GOPEL — Governance Orchestrator Policy Enforcement Layer — is the only published, fully disclosed reference implementation of a non-cognitive multi-AI governance architecture anywhere in the world. That claim carries weight because the search for something like it came up empty. In 2025, during the build of the HAIA-RECCLIN governance framework, the need […]
What 34 Reports Actually Told Us About AI: The Truth Behind the Hype, the Proof, and the Path Forward
A synthesis of research from McKinsey, Google, OpenAI, Anthropic, BCG, IBM, Microsoft, WEF, Deloitte, OECD, the Future of Life Institute, and more, compiled and critiqued by a practitioner. The Setup: Why This Matters More Than Another Hot Take Alex Issakova curated and shared a collection of 34 leading AI research reports from the world’s most […]
The Loop That Ate the Governor
When “Human in the Loop” Becomes “Human Lost in the Queue” A Case Study in Governance Architecture Failure The Argument Every major AI governance framework in circulation today includes some version of the same assurance: a human remains in the loop. The EU AI Act requires it in Article 14. The NIST AI Risk Management […]