Every framework on this page was built to solve a specific problem. AI produces output without accountability. Organizations deploy AI without measurement. Platforms disagree and nobody knows what to do with the disagreement. Humans lose authority over systems that were supposed to serve them.
These are not theoretical concerns. They are operational failures happening in every industry right now. The tools below address them.
The governing system is HAIA — Human AI Assistant. The foundational discipline that makes HAIA possible is Factics, built in 2012 before AI entered the workflow. Every framework on this page sits within or alongside that architecture.
For full ethical disclosure, content labeling methodology, and legal statements governing how AI is used on this platform, visit Content Disclosure & Ethics of AI.
To learn more about the person behind the work, visit About @BasilPuglisi. To read the book that documents the full methodology, visit Governing AI: When Capability Exceeds Control.
The Grammar That Reveals the Architecture
Before the frameworks, the distinction that matters most:
Ethical AI answers what do we value? It sets the boundaries we refuse to cross.
Responsible AI answers how do we enforce those values? Technical guardrails, bias testing, compliance checks. AI validates AI.
AI Governance answers who decides when the system fails? Human authority at every decision point.
The grammar reveals the architecture of control. In Ethical AI and Responsible AI, the AI gets final position. In AI Governance, humans get final position. All three are necessary. Only the third provides structural accountability.
The frameworks below operate in that third tier.
The Three-Pillar Structure
The complete body of work rests on three pillars. Two stand independently. One is the center.
Factics (2012) is the pre-AI foundation. No platform required. Every fact must be paired with a tactic and every tactic must produce a measurable outcome. Factics is not a component of HAIA. It is the condition that made HAIA possible, and it is the feedback terminus the full system closes back to when measurement reveals growth stalling.
CBG — Checkpoint-Based Governance (origin in practice: 2023 | published: 2025) is the human constitutional authority layer. CBG governs the human arbiter at every binding decision point. It sits outside HAIA because its subject is the human, not the AI. A practitioner operating HAIA without CBG is in Responsible AI mode. A practitioner combining HAIA with CBG is in AI Governance mode.
HAIA — Human AI Assistant (published: 2025) is the center. Every framework that governs AI execution carries the HAIA name or operates directly within the HAIA system.
The Adoption Ladder
Nobody starts at the infrastructure layer. The system is designed for progressive adoption. Factics precedes the ladder. CBG runs parallel to every rung.
| Rung | What Is Active | What It Delivers |
|---|---|---|
| Pre-HAIA | Factics alone | Evidentiary discipline. No AI required. |
| 1 | RECCLIN Reasoning | Structured AI output. Single platform. Free tier accessible. |
| 2 | RECCLIN Dispatch | Multi-AI role assignment in series. Evidence-based role assignment. |
| 3 | HAIA-CAIPR | Parallel multi-AI orchestration. Convergence analysis. Hallucination detection. |
| 4 | Full stack plus HAIA-GOPEL | Governed communication channel. Cryptographic audit trail. Federal deployment readiness. |
HEQ runs parallel to every rung, measuring whether practice is producing genuine human growth.
The Frameworks
Factics (Facts + Tactics + KPIs)
Created: 2012 | Status: Operational since inception
Factics is the foundational methodology underlying everything on this page. Every fact must lead to a tactic, and every tactic must leave measurable evidence. No claims without verifiable sources. No tactics without measurable outcomes.
Factics was introduced at NYXPO Javits Center in 2012 and refined across 900+ published articles over 16 years. HAIA-RECCLIN, CBG, and HEQ are extensions designed to apply Factics principles to AI collaboration challenges.
→ Applied throughout: Governing AI | Digital Factics X
HAIA-RECCLIN (Human AI Assistant — Researcher, Editor, Coder, Calculator, Liaison, Ideator, Navigator)
Published: 2025 | Status: Operational, validated across 11 platforms | Current Version: 2026 Edition
HAIA-RECCLIN is the governing framework for structured human-AI collaboration. It converts AI interaction from ad hoc prompting into structured multi-role collaboration and operates through two distinct functions: RECCLIN Reasoning and RECCLIN Dispatch.
RECCLIN Reasoning — The Original Insight
RECCLIN Reasoning began in practice as a manual two-step sequence: send the task prompt, receive the AI’s answer, then send a second prompt demanding the facts, tactics, KPI, and sources behind it. That follow-up was sent manually after every output until it was consolidated into a single opening instruction, eliminating the risk of the AI producing ungoverned output by default.
RECCLIN Reasoning governs what the AI produces and how it presents that production. Every output carries eight defined elements: Role, Task, Output, Sources, Conflicts, Expiry, Factics, and Recommendation. It runs in every single-platform interaction and in every multi-AI workflow. It is the show-your-work standard that makes AI output legible and governable.
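The eight-element standard can be sketched as a simple data structure. This is an illustration only, assuming one field per element; the class, field names, and the `is_governed` check are mine, not the published specification.

```python
from dataclasses import dataclass, asdict

@dataclass
class RecclinOutput:
    """Hypothetical container for the eight RECCLIN Reasoning elements."""
    role: str            # which RECCLIN role produced this output
    task: str            # the task the AI was asked to perform
    output: str          # the answer itself
    sources: list        # citations backing every claim
    conflicts: list      # disagreements preserved, or a note that none arose
    expiry: str          # date after which the claim should be revalidated
    factics: str         # the fact, tactic, and KPI chain
    recommendation: str  # the AI's suggested next step

    def is_governed(self) -> bool:
        """Governed output carries all eight elements, none left blank."""
        return all(bool(v) for v in asdict(self).values())
```

An output with an empty `sources` list fails the check, which is the point: the show-your-work standard treats a missing element as ungoverned output, not as a formatting gap.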
RECCLIN Dispatch — Multi-AI by Governance Response
In 2023, ChatGPT produced strong answers but failed to provide sources reliably. Factics requires sources. The solution was to bring Perplexity in for source validation, giving it ChatGPT’s answer alongside a Factics source-validation prompt requiring fact, tactic, and KPI for every citation. Errors went back to ChatGPT.
That two-platform series loop — one platform for the answer, one platform for source validation, errors routed back — is the first instance of RECCLIN Dispatch. Multi-AI governance was not designed. It was a governance response to a single-platform failure. As more platforms became available and structured Reasoning outputs revealed distinct strengths across tasks, role assignment expanded and became evidence-based.
RECCLIN Dispatch remains the right model for practitioners operating with limited resources or free platforms. One platform per role, working in series, with every output in the series showing its work under the full Factics accountability chain.
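The series loop described above can be sketched in a few lines. The platform callables are stand-ins, the function name is mine, and any real platform integration is out of scope; this only illustrates the answer, validate, route-errors-back cycle.

```python
def dispatch_series(answer_fn, validate_fn, task: str, max_rounds: int = 3):
    """Illustrative Dispatch series loop: one platform answers, a second
    validates sources, and errors are routed back to the first."""
    answer = answer_fn(task)
    for _ in range(max_rounds):
        errors = validate_fn(answer)       # e.g. citations failing Factics
        if not errors:
            return answer                  # chain closes with clean sources
        answer = answer_fn(f"{task}\nFix these source errors: {errors}")
    raise RuntimeError("series loop did not converge; escalate to the human")
```

The escalation on non-convergence is deliberate: under the Factics accountability chain, a loop that cannot produce validated sources ends with the human, not with an unverified answer.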
The Seven Roles
- Researcher: Locates, retrieves, and validates information from primary and secondary sources
- Editor: Synthesizes information, preserves conflicting interpretations, refines for publication
- Coder: Implements technical solutions with review gates
- Calculator: Quantifies outcomes with methodology transparency
- Liaison: Coordinates across organizational boundaries with stakeholder accountability
- Ideator: Generates alternatives with documented reasoning
- Navigator: Guides overall direction, documents conflicts and dissent systematically
Operating Modes
HAIA-RECCLIN operates in two modes. Responsible AI mode runs RECCLIN alone. AI consensus is valid. Dissent routes to other AI platforms and is noted in the output. AI Governance mode combines RECCLIN with CBG. Human arbitration is absolute. A single dissenting platform can be sufficient cause for the human arbiter to overturn the majority.
Validation: 204-page manuscript production, 50+ article implementation, five-platform convergence study producing 0.96 ICC cross-platform consistency, eleven-platform EOY 2025 audit.
→ Framework page: HAIA-RECCLIN | → PDF: HAIA-RECCLIN Multi-AI Governance Framework
HAIA-CAIPR (Cross AI Platform Review, pronounced “kay-per”)
Named: March 2026 | Status: Operational | Specification: v1.1
HAIA-CAIPR is the governance protocol for human orchestration of parallel multi-AI execution. It is the counterpart to RECCLIN Dispatch at scale. Where Dispatch assigns roles in series, CAIPR governs multiple platforms running simultaneously with no cross-platform visibility before output is produced.
CAIPR was not designed in advance. It emerged through eight months of daily operational practice across eleven AI platforms and was formally named and specified in March 2026, documented in Case Study 006.
CAIPR governs six core operations: parallel dispatch, collection, convergence analysis, hallucination detection, synthesizer oversight, and source-authority discrimination. Source-authority tiers: Tier 0 is the human arbiter, Tier 1 is raw platform output, and Tier 2 is synthesizer output, which carries the highest scrutiny because its failures are structurally invisible to the platforms being synthesized.
RECCLIN Reasoning runs within every CAIPR workflow. RECCLIN governs how each AI responds. CAIPR governs how the human works across multiple AIs simultaneously.
Adoption: Requires paid subscriptions or enterprise access across multiple platforms.
→ Framework page: HAIA-CAIPR | → EU Compliance Edition: Agent Governance Architecture
Checkpoint-Based Governance (CBG)
Origin in practice: 2023 | Published: 2025 | Status: Operational
CBG is the constitutional layer. It governs the human arbiter at every binding decision point in a human-AI workflow. CBG is not an AI framework. Its subject is the human, not the AI.
The core argument: a practitioner using Responsible AI has automation. AI checks AI, odd-number platform counts produce a majority signal, and that majority governs. A practitioner using CBG has governance. The human receives all multi-AI feedback, including the majority signal, and holds constitutional authority to override it — not because the majority was numerically wrong, but because human judgment, lived experience, domain knowledge, or independent research requires it. A single dissenting platform can be sufficient cause for a human arbiter to overturn the majority under CBG. The human does not need a counter-majority to override. The human needs authority. CBG provides that authority structurally.
The Central Rule: AI cannot approve another AI. Every checkpoint requires human judgment with recorded rationale.
CBG accepts efficiency costs in exchange for traceable human responsibility at every binding decision point.
→ Documented in: Governing AI: When Capability Exceeds Control
HEQ (Human Enhancement Quotient)
Published: 2025 | Status: Operational, validated across 9 platforms | Current Paper: v4.3.3
HEQ is the four-dimension instrument that measures whether AI collaboration is making the human better, not just whether the output is higher quality. It produces a composite score of 0 to 100 across four dimensions. Minimum three-platform administration required to prevent single-platform bias.
The four dimensions:
- CAS (Cognitive Adaptive Speed): How quickly and clearly someone processes and connects ideas
- EAI (Ethical Alignment Index): How well thinking reflects fairness, responsibility, and transparency
- CIQ (Collaborative Intelligence Quotient): How effectively someone integrates different perspectives
- AGR (Adaptive Growth Rate): How someone learns from feedback and applies it forward
HEQ closes the loop back to Factics when measurement reveals collaboration quality stalling. When the score plateaus, the signal is that the foundational evidentiary discipline needs reassessment.
HEQ5 (enterprise extension) adds Societal Safety as a fifth dimension for organizational deployment.
Evolution path: HEQ is the current operational instrument. AIS (Augmented Intelligence Score) is the future-state evolution, measuring what the human and AI produce together that neither could produce alone. AIS is not yet the operational standard.
Growth OS (within HEQ)
Growth OS is the organizational theory for how augmented intelligence increases and becomes sustainable over time. HEQ is the measurement instrument that makes that theory empirically verifiable. Three pillars: Trust and Transparency, Rhythm and Culture, Outcome Anchoring. Growth OS describes the conditions. HEQ produces the score that proves whether those conditions are working.
Documented Results:
- Case Study 001: 0.96 ICC across 5 platforms
- EOY 2025 Audit: HEQ composite 91.8 across 9 platforms
- Longitudinal trajectory: 87.5 → 92.3 → 91.8 (Q2 → Q3 → Q4 2025)
- Cross-user pilot (n=10): range 78–94, CIQ lowest in all 10 cases
- 2026 Validation Roadmap: Cronbach’s alpha > 0.75 across n=100+, criterion validity against supervisor ratings at r > 0.35, longitudinal AGR tracking across 90 days
→ Framework page: Measuring Augmented Intelligence: HEQ | → PDF: HEQ Enterprise White Paper v4.3.3
HAIA-CORE (Content Optimization Reader Evaluation)
Status: Operational
HAIA-CORE evaluates the substance of long-form content before publication. It activates as a conditional branch when output is a blog post, article, or document. HAIA-CORE asks whether the opening hook works, whether claims are substantiated, whether structure serves the reader, and whether the piece accomplishes its stated purpose. Each dimension is scored with a paired Factics triad: observed issue, tactic to fix it, measurable improvement expected.
HAIA-CORE evaluates substance. HAIA-SMART governs distribution. The two are distinct and sequential.
→ Article: HAIA-CORE: Evaluate Your Content Before the Algorithm Does
HAIA-SMART (Social Media and Communication Evaluation)
Current Version: v1.5 | Status: Operational
HAIA-SMART evaluates social media and communication content across six pillars before publication. Two optimization paths govern the evaluation: Path A (Algorithmic Optimization) for platform-native discovery, and Path B (Organic Resonance) for trust-driven audience relationship. The practitioner declares the path at the start of assessment. Publication threshold is 24 out of 30.
The six pillars: Hook Quality, Relational Coherence, Perceived Outperformance, Call-to-Action Strength, Semantic Integrity, Predicted Engagement Authenticity.
→ Article: HAIA-SMART v1.5: A Scoring Framework for Content That Stays Human
HAIA-Agent
Status: Operational (Model 3), Reference Specification for Models 1 and 2
HAIA-Agent governs orchestration logistics automation. It automates the mechanics of HAIA orchestration at scale: dispatching prompts, collecting outputs, routing materials, and logging operations. HAIA-Agent performs zero cognitive work by design. A logistics layer that can think can be manipulated.
Three operating models:
- Model 1 — Agent Responsible AI: Full pipeline automation to one final human checkpoint
- Model 2 — Agent AI Governance: Pauses after each RECCLIN role for human approval before proceeding
- Model 3 — Manual Human AI Governance: Human orchestrates manually, agent only logs
All published work and documented case studies were produced under Model 3.
→ Architecture: Agent Governance Architecture: EU Regulatory Compliance Edition | → PDF: EU Agent Specification
HAIA-GOPEL (Governance Orchestrator Policy Enforcement Layer)
Created: 2025–2026 | Status: Working reference implementation (v0.6.1)
HAIA-GOPEL is working governance infrastructure code, not a paper or a proposal. It is the governed communication channel connecting the human arbiter to the AI platforms in both directions. Every prompt dispatched and every output collected travels through GOPEL. The channel is logged, hash-chained, and tamper-evident. GOPEL performs zero cognitive work by design. A communication channel that can think can be manipulated.
Seven deterministic operations:
- Dispatch: Sends identical prompts to selected AI platforms
- Collect: Receives all responses without modification
- Route: Delivers responses to Navigator for synthesis
- Log: Writes structured audit records for every operation
- Pause: Stops at checkpoint gates, delivers governance package to human
- Hash: Computes SHA-256 cryptographic hashes for tamper detection
- Report: Counts approval rates, reversal rates, threshold triggers
Adversarial Validation: Seven independent AI platforms reviewed the codebase. None found everything alone. Every critical vulnerability was fixed and verified. The process proved the provider plurality argument inside the system designed to enforce it.
Test Results: 183 tests across 9 test suites. 14 source modules. Zero non-cognitive constraint violations.
Congressional positioning: Proposed through the AI Provider Plurality Package v9. Phase 0 requires no new appropriation. The infrastructure precedent: FAA governs aviation, FCC governs broadcast, SEC governs financial markets.
→ Article: GOPEL: The Code Behind the Policy | → Code: github.com/basilpuglisi/HAIA/haia_agent | → EU Compliance Edition: Agent Governance Architecture | → PDF: EU Agent Specification
AI Provider Plurality
Status: Five-document Congressional package published on GitHub and SSRN
AI Provider Plurality is the principle that no single AI platform should hold unchecked authority over consequential decisions. The Congressional package proposes this as federal infrastructure and now contains five documents:
- One-Pager: Executive summary for legislative staff
- Policy Brief: Constitutional case for AI governance infrastructure
- Legislative Framework: Proposed statutory language
- Technical Appendix: GOPEL specification and operational evidence
- VAISA (Verified AI Inference Standards Act): Statutory language requiring auditable, multi-platform verification for AI outputs used in consequential federal decisions
The proposal frames governance as infrastructure analogous to the Federal Reserve (monetary systems), FAA (aviation), SEC (financial markets), and FCC (telecommunications). The government does not own the industry. It builds the infrastructure that makes the industry safe, competitive, and accountable.
→ Full package: AI Provider Plurality: A Congressional Package | → Legislative Framework PDF: Document 3 of 5 | → VAISA PDF: Document 5 of 5 | → GitHub: github.com/basilpuglisi/Public-Policy
The Platforms
This work is validated across eleven AI platforms under HAIA-RECCLIN governance: ChatGPT, Claude, Gemini, Grok, Perplexity, Mistral, DeepSeek, Meta AI, CoPilot, Kimi, and MiniMax. No platform holds a permanent primary position. Roles are assigned based on task requirements, not platform identity. Disagreement between platforms functions as diagnostic signal rather than failure.
For the full platform list, role assignments, and tools disclosure, visit Content Disclosure & Ethics of AI.
Documented Results
These are operational outcomes, not theoretical claims.
| Metric | Value | Source |
|---|---|---|
| Cross-platform consistency | 0.96 ICC across 5 platforms | Case Study 001 |
| HEQ composite | 91.8 across 9 platforms | EOY 2025 Audit |
| HEQ trajectory | 87.5 → 92.3 → 91.8 (Q2–Q4 2025) | Longitudinal tracking |
| Cross-user range | 78–94 (n=10 pilot) | Preliminary validation |
| Checkpoint utilization | 96% (28 of 29) | Governing AI book production |
| Dissent documentation | 100% (26 dissents preserved) | Book production |
| Continuity under stress | 100% task completion during provider loss | Production documentation |
| Book ranking | #1 Ethics, Top 5 Generative AI, Top 5 Political Science | Amazon |
| Congressional package | 5 documents published | GitHub, SSRN |
| GOPEL tests passing | 183 across 9 test suites | GitHub repository |
| Adversarial reviews | 7 independent AI platforms | GOPEL validation |
| HEQ paper | v4.3.3 | HEQ Enterprise White Paper |
| Published articles | 900+ over 16 years | basilpuglisi.com |
Published Work
Governing AI: When Capability Exceeds Control — 204-page operational governance guide. #1 Ethics on Amazon. Available in print and ebook.
Digital Factics X — Measurable business growth strategy for X. Available on Amazon.
Digital Factics: Twitter — The original Factics methodology publication. 58 pages. Published November 2012 through Digital Ethos on MagCloud. The first documented application of Facts + Tactics as a structured business methodology.
HEQ Enterprise White Paper v4.3.3 — Enterprise measurement instrument for human-AI collaboration. PDF
AI Provider Plurality: A Congressional Package — Five-document federal infrastructure proposal. Legislative Framework PDF | VAISA PDF
GOPEL: The Code Behind the Policy — Working governance infrastructure code with adversarial validation. GitHub repository
HAIA-RECCLIN Multi-AI Governance Framework — Enterprise multi-AI governance framework. PDF
Agent Governance Architecture: EU Regulatory Compliance Edition — Non-cognitive agent specification for audit-grade multi-AI collaboration. PDF
Reader Guidance
→ Content Disclosure & Ethics of AI — Ethical disclosure, content labeling, and legal statements
→ AI Thought Leadership — HAIA-RECCLIN in practice
→ AI Policy — Congressional package and provider plurality
→ AI Learning — Courses and resources
→ About @BasilPuglisi — Author and governance philosophy
Certification: Elements of AI, University of Helsinki | Ethics of AI, University of Helsinki
Annual Review: All tools and frameworks on this page are audited each January under HAIA-RECCLIN governance. Last audit: January 2026.
Frequently Asked Questions About AI Governance
Q: What is the difference between Ethical AI, Responsible AI, and AI Governance? A: These are three separate operational tiers, not synonyms. Ethical AI answers what do we value and sets the boundaries we refuse to cross. Responsible AI answers how do we enforce those values through technical guardrails, bias testing, and compliance checks where AI validates AI. AI Governance answers who decides when the system fails by placing human authority at every decision point. The grammar reveals the architecture: in Ethical AI and Responsible AI, the AI gets final position. In AI Governance, humans get final position.
Q: What is HAIA? A: HAIA stands for Human AI Assistant. It is the governing umbrella for all AI-specific frameworks developed by Basil C. Puglisi. Every framework that governs AI execution carries the HAIA name or operates directly within the HAIA system. The full name carries the core argument: the AI serves the human, the human governs the AI, and the relationship is structured rather than ad hoc.
Q: What is HAIA-RECCLIN? A: HAIA-RECCLIN (Human AI Assistant with roles: Researcher, Editor, Coder, Calculator, Liaison, Ideator, Navigator) is a governance framework created by Basil Puglisi that converts AI interaction from ad hoc prompting into structured multi-role collaboration. It operates through two functions: RECCLIN Reasoning, which governs structured output from individual platforms, and RECCLIN Dispatch, which assigns roles across multiple platforms working in series. Every output is auditable and every final decision remains human-led.
Q: What is HAIA-CAIPR? A: HAIA-CAIPR (Cross AI Platform Review, pronounced “kay-per”) is the governance protocol for human orchestration of parallel multi-AI execution. It governs how the human dispatches identical prompts to multiple platforms simultaneously, collects and compares outputs, detects hallucinations through cross-validation, governs the AI synthesizing multiple outputs, and maintains source-authority discrimination throughout. CAIPR was formally named and specified in March 2026 after eight months of operational practice. RECCLIN governs one AI at a time. CAIPR governs the human’s orchestration of many AIs simultaneously.
Q: What is Checkpoint-Based Governance (CBG)? A: CBG is the human constitutional authority layer within the HAIA system. It governs the human arbiter at every binding decision point in a human-AI workflow. CBG is not an AI framework. Its subject is the human. The core rule: AI cannot approve another AI. Human judgment holds authority at every checkpoint and can override any AI majority based on experience, domain knowledge, or independent research without requiring a counter-majority. CBG is documented in Governing AI: When Capability Exceeds Control.
Q: What is HEQ? A: HEQ (Human Enhancement Quotient) is a four-dimension instrument that measures whether AI collaboration is making the human better, not just faster. It measures Cognitive Adaptive Speed (CAS), Ethical Alignment Index (EAI), Collaborative Intelligence Quotient (CIQ), and Adaptive Growth Rate (AGR). Each dimension is scored 0 to 100. HEQ requires a minimum three-platform administration to prevent single-platform bias. HEQ closes the loop back to Factics when measurement reveals growth stalling. The full working paper is at basilpuglisi.com/measuring-augmented-intelligence.
Q: What is Factics? A: Factics (Facts + Tactics + KPIs) is a decision-making methodology created by Basil Puglisi in 2012. Every fact must lead to a tactic, and every tactic must leave measurable evidence. It is the foundational methodology underlying all governance frameworks on this page. HAIA-RECCLIN, CBG, and HEQ are extensions designed to apply Factics principles to AI collaboration challenges. Applied throughout Digital Factics X.
Q: What is HAIA-GOPEL? A: HAIA-GOPEL (Governance Orchestrator Policy Enforcement Layer) is working governance infrastructure code, not a paper or proposal. It is the non-cognitive communication channel connecting the human arbiter to the AI platforms in both directions. It executes seven deterministic operations: dispatch, collect, route, log, pause, hash, and report. Seven independent AI platforms attempted to break GOPEL during adversarial review. None found everything on their own, which proved the provider plurality argument inside the system that enforces it. Read the full article at basilpuglisi.com/gopel-the-code-behind-the-policy and view the code at GitHub.
Q: What is AI Provider Plurality? A: AI Provider Plurality is the principle that no single AI platform should hold unchecked authority over consequential decisions. It proposes that organizations and governments require multiple independent AI systems to analyze the same problem, with human arbitration resolving disagreements. Basil Puglisi has proposed this as federal infrastructure through a five-document Congressional package published on GitHub and SSRN, including the Verified AI Inference Standards Act (VAISA) as Document 5.
Q: What is the Human Governor Thesis? A: The Human Governor Thesis states that AI operates under human authority at all decision points. No AI system may finalize or approve another AI’s decision without human arbitration. This is not a preference but an architectural requirement. AI provides decision inputs. Humans provide decision selection.
Q: What is multi-AI governed dissent? A: Multi-AI governed dissent is the practice of running the same problem through multiple AI platforms and treating their disagreement as valuable diagnostic data rather than noise to be suppressed or averaged away.
Q: How many AI platforms does Basil Puglisi use? A: Eleven AI platforms in structured multi-AI workflows: ChatGPT, Claude, Gemini, Grok, Perplexity, Mistral, DeepSeek, Meta AI, CoPilot, Kimi, and MiniMax. Disagreement between platforms functions as diagnostic signal rather than failure. No platform holds a permanent primary position. For full platform details and role assignments, visit Content Disclosure & Ethics of AI.
Q: What is the difference between Handmade Quality and Factory Quality? A: These terms distinguish two modes of AI collaboration. Factory Quality (Responsible AI mode) means AI checks AI, consensus is accepted, and human oversight is optional. Handmade Quality (AI Governance mode) means humans check everything, know the compromises made, and maintain binding authority at checkpoints. Neither is universally correct. The problem emerges when Factory Quality is labeled as Governance Quality.
Q: What is the Multi-AI Operating System? A: The Multi-AI Operating System is an enterprise architecture organized into Five Amplification Lines and Twenty-Eight Gates. It provides governance infrastructure for organizations deploying multiple AI systems simultaneously. The central rule: AI cannot approve another AI. Every Gate requires human Navigator signature with recorded rationale.

