HAIA-CARCS: Compliance Accountability Record & Case Study

April 23, 2026 by Basil Puglisi

A Structured Documentation Protocol for Human-Governed AI Work (Read the PDF HERE)

basilpuglisi.com | github.com/basilpuglisi/HAIA – April 2026 | Version 1.33 | Open Access

★ This is a working paper under active development, released for feedback and collaboration. Input from governance practitioners, compliance officers, legal professionals, and researchers is actively invited.

The Documentation Gap in Governed AI Work

Organizations and practitioners are deploying AI at scale, yet most governance discussion still centers on what AI should do, what it should not do, and which policy guardrails should contain it. What receives far less attention is the practical question that arrives after the work is finished: what record exists of how the work was done, who decided what, and which evidence shaped the decision?

This question is no longer theoretical. Auditors, regulators, researchers, and internal review teams are asking for record keeping, traceable human oversight, and documented methodology. The EU AI Act raises the pressure in regulated settings, while enterprise compliance programs and professional publishing raise it everywhere else. Any practitioner who has tried to reconstruct a complex multi-platform project two weeks later already knows the operational truth. Preserved conversation histories are not the same thing as a governed record.

The problem is not lack of trace, because AI work leaves plenty of trace. The problem is that those traces are scattered across platforms, trapped in provider-specific formats, and organized around conversation flow rather than around the questions an audit actually asks. An audit asks who decided, on what evidence, at which point, and why. Raw platform histories can answer those questions, but only after someone reads through everything and painstakingly reconstructs the decision logic from fragments spread across sessions and providers. That is not documentation. That is archaeology.

CARCS exists to close that gap.

What CARCS Is

CARCS stands for Compliance Accountability Record and Case Study. It is a structured documentation protocol that transforms the raw evidence of AI work sessions into a portable, organized record suitable for internal review, external audit, academic citation, or publication as a case study.

CARCS is human-triggered. A practitioner runs a prompt at the end of a session or project, the synthesis platform draws on available session evidence, and the result is a ten-section document that captures what happened, who decided, what the evidence was, and what comes next. The practitioner reviews and approves the document before it is finalized. Nothing in the record is finalized until a named human has signed off on it.

CARCS is not a framework and does not govern how AI work is conducted. It documents how AI work was conducted, and the distinction matters because CARCS can apply to any AI workflow, not only those operating under HAIA methodology. Any practitioner or organization doing structured AI work who needs a record of that work can use CARCS. The prompt in the appendix to this paper is deliberately designed without framework-specific terminology so it functions across any context.

Within the HAIA ecosystem, CARCS serves as the human-produced documentation layer for Agent Model 3 operations, where the practitioner orchestrates AI work manually across multiple platforms without automated logging infrastructure. Outside the HAIA ecosystem, CARCS applies wherever AI work needs to be documented, organized, and made legible to anyone who was not present for the work itself.

Why Structured Documentation Matters

Structured documentation matters first at the evidentiary level. When an organization is asked to show that a human reviewed and approved an AI output before action was taken, a conversation history is thin evidence. It shows what was said, but it does not reliably show what was decided. It shows what the model produced, but it does not always show whether the human applied judgment. A structured record that names the arbiter, classifies the decision, and records the rationale belongs to a different evidentiary category. It is not just activity preserved on a screen but governance made legible.

It matters just as much at the operational level. AI work often stretches across multiple sessions, several platforms, and more than one week. Each new session on a platform without durable memory forces the practitioner to reconstruct context, decisions, and current project state before useful work can resume. A CARCS record produced at the end of each session changes that pattern. The next session opens with a governed briefing rather than a memory exercise, which reduces restart time, lowers reconstruction error, and keeps the work compounding forward.

It also matters at the institutional level. Each documented decision, each preserved dissent, and each named act of human arbitration becomes part of the organization’s empirical record of how it actually works with AI. Over time that record says far more about governance maturity than a policy statement ever can. Without that evidence, claims of oversight remain claims. With it, they begin to look like practice.

How CARCS Works

CARCS produces a ten-section draft record from a three-part prompt suite, subject to practitioner completion where evidence or provenance is incomplete. The sections are organized to move from context to evidence to implications to continuity, and the structure is designed so that each section answers a specific question an auditor or reviewer would ask.

Two sections carry particular weight for practitioner completion. Section 5 is only as strong as the explicit decision trace in the underlying record, and Section 10 depends on external provenance data the synthesis platform cannot generate independently. Both sections are flagged for practitioner review when evidence is absent or ambiguous.

The opening sections establish context: what project this is, what session type it represents, what the evidence status of the record is, and what working context existed at session open. The evidence status declaration is particularly important. Not all synthesis platforms have equal access to session history, and CARCS requires the record to declare upfront whether it is drawing on complete project memory, a single session window, partial cross-session memory, or manually assembled evidence.

The middle sections capture the substance of the work: what prompts were dispatched and to which platforms, what the platforms produced including convergence and dissent, and what the human arbiter decided at each point. The dissent requirement deserves emphasis. AI platforms operating in parallel frequently disagree, and those disagreements are often the most valuable signal in the entire session. A synthesis that smooths divergent outputs into a unified summary eliminates precisely the information that governance exists to preserve. CARCS requires dissent to be isolated and documented, not averaged away.

The human arbiter record classifies every decision using a four-type taxonomy drawn from operational practice: corrective override, where the platform made an error the human caught; creative supersession, where the human independently produced something better than the platform’s best output; checkpoint confirmation, where the human actively reviewed and approved rather than passively accepted; and deferred decision, where the human identified an item but chose not to resolve it in this session. This taxonomy matters because it distinguishes between different kinds of human engagement. Governance that cannot tell the difference between a human who caught an error and a human who approved without reading is not governance.
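
As an illustrative sketch only, not an official CARCS artifact, the four-type taxonomy can be represented as a small enumeration; the names mirror the taxonomy above, while the short descriptions are paraphrases:

```python
from enum import Enum

# Hedged sketch: the four decision types described in the text,
# expressed as an enum so records can be classified programmatically.
class ArbiterDecision(Enum):
    CORRECTIVE_OVERRIDE = "platform error caught and corrected by the human"
    CREATIVE_SUPERSESSION = "human independently produced a better output"
    CHECKPOINT_CONFIRMATION = "human actively reviewed and approved"
    DEFERRED_DECISION = "identified but intentionally left unresolved"

for decision in ArbiterDecision:
    print(decision.name, "-", decision.value)
```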

The quality of Section 5 depends directly on decision tracing during the session itself. The recommended practice is to tag governance decisions inline using [T0-OVERRIDE] at the moment they occur. This tag is a searchable anchor that the synthesis platform extracts when generating Section 5, producing a far more accurate record than reconstruction from memory after the fact. A practitioner who tags decisions during a session produces a classified Section 5 without additional effort at record generation time. A practitioner who tags nothing produces a Section 5 flagged for manual completion. Tagging is not mandatory, but its absence is the single most common cause of incomplete Section 5 records.
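
Because the tag is a plain searchable string, extracting tagged decisions from an exported transcript is straightforward. The sketch below assumes a plain-text export with one message per line; the regex-based approach is an illustration, not part of the protocol:

```python
import re

# Hedged sketch: pull [T0-OVERRIDE] anchors out of an exported session
# transcript. Any plain-text export with one message per line will work.
TAG = re.compile(r"\[T0-OVERRIDE\]\s*(.*)")

def extract_decisions(transcript: str) -> list[str]:
    """Return the text following each [T0-OVERRIDE] tag, one per decision."""
    return [m.group(1).strip() for m in TAG.finditer(transcript)]

session = (
    "User: draft the intro section\n"
    "Assistant: here is a draft...\n"
    "User: [T0-OVERRIDE] rejected draft A, going with draft B for tone\n"
    "User: [T0-OVERRIDE] approved final headline as written\n"
)
print(extract_decisions(session))
```

A synthesis platform performs the equivalent search when generating Section 5; the value of the tag is that the anchor is unambiguous regardless of who or what scans for it.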

The governance observations section carries the human arbiter’s direct commentary on what the session showed about the methodology in practice, written in third person with the arbiter named explicitly in the prose. Named attribution is mandatory, because accountability without a name attached to it is not accountability.

The closing sections handle continuity and implications: what the session reveals about the methodologies being used, what the next practitioner needs to know before the next session begins, what items remain open, and what the underlying evidence base for the record is. The raw evidence index connects the CARCS narrative back to the actual platform session records, applying SHA-256 integrity standards where tooling permits and Practitioner Disclosure where it does not. The distinction between hash-verified and attestation-grade records is declared rather than hidden.

Practitioners who want to move from Attestation Grade to Hash Verified before GOPEL infrastructure is in production can do so today with a basic Python script. Exporting the session as a text or JSON file and running a SHA-256 hash against that file using Python’s built-in hashlib library takes under a minute and produces a verifiable fingerprint that can be recorded in Section 10. Any subsequent modification to the exported file produces a hash mismatch, which is the chain-of-custody mechanism the Raw Evidence Index is designed to establish. No specialized tooling is required. A locally computed hash stored in operator-controlled storage is self-attested integrity, not externally verified chain of custody. External verification requires anchoring the hash to a third-party timestamp service or equivalent infrastructure, which GOPEL is designed to provide.
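
As a sketch of that practice, the fingerprint can be computed with Python’s standard library alone. The export filename here is a placeholder, not a mandated name; the resulting digest is what gets recorded in Section 10:

```python
import hashlib
from pathlib import Path

def sha256_of_export(path: str) -> str:
    """Compute the SHA-256 fingerprint of an exported session file.

    Any later edit to the file produces a different digest,
    which is what exposes post-hoc modification.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder export file for demonstration only.
Path("session_export.txt").write_text("example session transcript")
print(sha256_of_export("session_export.txt"))
```

Remember the caveat above: a locally computed hash in operator-controlled storage is self-attested integrity, not externally verified chain of custody.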

Legal Position and Current Exposure

Any document positioning itself as a compliance artifact must address the question of legal standing, and the honest answer in April 2026 is that the legal terrain is shifting and no guarantee is possible.

In United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026), Judge Jed Rakoff declined to extend attorney-client privilege or work product protection to documents Bradley Heppner generated using Claude, Anthropic’s consumer AI platform, placing significant weight on the Anthropic privacy policy terms the user had accepted and on the absence of counsel direction. The ruling is the first of its kind, and legal commentary names Claude specifically because the consumer-versus-enterprise distinction is central to how future cases may be decided. Akin Gump (“SDNY Rules Communications With a Public Generative AI Platform Are Not Protected by Attorney-Client Privilege or Work Product Doctrine,” February 2026) and the Harvard Law Review (“United States v. Heppner,” March 2026) have characterized the ruling as fact-specific rather than categorical, suggesting that different configurations, particularly closed enterprise systems or counsel-directed work, may receive different treatment.

The author holds that the court decided Heppner without augmented intelligence theory before it, without applying the expectation of privacy doctrine from Katz v. United States (1967) to the gap between what platform terms technically permitted and what a reasonable practitioner expected, and without examining whether terms of adhesion on platforms that have become functionally essential cognitive infrastructure can constitute knowing waiver. These are arguments that were not made, not arguments the court considered and rejected. The author views these as valid grounds for future challenge as case law develops.

The author further holds that infrastructure changes could materially shift this risk environment. A closed, cryptographically attested, operator-controlled channel for AI inference, of the kind proposed in the Governance Orchestrator Policy Enforcement Layer (GOPEL) specification and the Verified AI Inference Standards Act (VAISA) submitted to the 119th Congress, may address the primary confidentiality exposure that public commercial platform terms currently create. This is the author’s position on a developing situation, offered with conviction and without guarantee.

Practitioners in regulated or legally sensitive contexts should consult legal counsel before treating any CARCS record as a privileged compliance artifact under current infrastructure conditions.

There is also a less-discussed risk that practitioners should weigh before choosing a compliance-grade label. A CARCS record that names the arbiter, classifies decisions, preserves dissent, and documents deferred items is also a roadmap for any challenger seeking to attack those decisions. A Checkpoint Confirmation with thin rationale is worse than no record because it documents shallow review under a label claiming depth. Preserved dissent is discoverable and can be used against the practitioner as evidence that the other platforms warned of a risk and the arbiter approved anyway. This is not a reason to avoid producing records. It is a reason to choose the Evidentiary Use label carefully and to ensure that every Checkpoint Confirmation in Section 5 carries rationale that reflects the actual depth of review. The protocol is designed to produce the most defensible record possible within the constraints of current practice.

Versioning and File Management

Every CARCS document receives a unique versioned filename following the convention HAIA_CARCS_[ProjectIdentifier]_v[Major]_[Minor].md. Version increments are mandatory on every file change without exception: a .1 increment for governance or structural changes (any change to content, argument, schema, or sections) and a .01 increment for detail changes (prose, grammar, spelling, voice edits, and formatting corrections). Prior versions are never overwritten. Each revision produces a new file so the complete change history is visible in the filename sequence.
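
To make the convention concrete, here is a hedged helper sketch. The increment rules follow the text above; the function names and the Decimal-based arithmetic are illustrative assumptions, not part of the protocol:

```python
from decimal import Decimal

def next_version(current: str, change: str) -> str:
    """Return the next version string: 'structural' adds 0.1, anything else 0.01."""
    step = Decimal("0.1") if change == "structural" else Decimal("0.01")
    return str(Decimal(current) + step)

def carcs_filename(project: str, version: str) -> str:
    """Format a version like '1.33' into the HAIA_CARCS filename convention."""
    major, _, minor = version.partition(".")
    return f"HAIA_CARCS_{project}_v{major}_{minor}.md"

# A structural change to a v1.0 record produces v1.1.
print(carcs_filename("ExampleProject", next_version("1.0", "structural")))
# → HAIA_CARCS_ExampleProject_v1_1.md
```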

When an error is discovered after approval, the correction procedure is to produce a new versioned file labeled ERRATA in the Document Control block, describe what was wrong and what the correction is, and mark the prior version as superseded with a pointer to the corrected file. A compliance record with known errors and no errata trail is worse than no record.

EU AI Act Context

The EU AI Act’s high-risk AI system provisions are currently scheduled to take full effect on August 2, 2026, though active European Commission proposals under the Digital Omnibus package may extend the high-risk timeline by up to 16 months. Practitioners in EU-regulated contexts should confirm the operative dates with legal counsel before treating August 2026 as a hard deadline. Where the provisions do apply, practitioners deploying AI governance should assess CARCS records against Articles 12 and 14, which address record-keeping and human oversight respectively. CARCS is designed to produce evidence directly relevant to both. Article 11, which addresses technical documentation, requires organizational documentation beyond what CARCS alone provides, and practitioners in EU-regulated contexts should supplement CARCS accordingly.

The HAIA Context

CARCS originated inside the HAIA ecosystem, a body of open-source human-AI governance work developed by Basil C. Puglisi, MPA and published at github.com/basilpuglisi/HAIA under Creative Commons. HAIA stands for Human Artificial Intelligence Assistant, and the ecosystem it names is a layered architecture of governance frameworks, measurement instruments, and operational protocols built around a single governing principle: human judgment must remain at the center of every consequential AI-assisted decision.

The HAIA ecosystem includes several interconnected frameworks. Factics, developed in 2012, is the foundational methodology pairing facts with tactics and measurable outcomes. HAIA-RECCLIN structures how AI platforms respond to human requests, assigning seven functional roles and requiring ten-field structured output. HAIA-CAIPR (Cross AI Platform Review) governs parallel multi-AI orchestration. GOPEL (Governance Orchestrator Policy Enforcement Layer) automates the mechanical operations of CAIPR through seven deterministic steps with zero cognitive work by design, producing hash-chained, append-only audit trails. CBG (Checkpoint-Based Governance) is the constitutional authority layer. HEQ and AIS (Human Enhancement Quotient and Augmented Intelligence Score) measure what human-AI collaboration produces that neither party could produce independently.

The three HAIA operating models define how much automation runs between the human and the AI output. Model 1 runs a full pipeline with a single human checkpoint at the end. Model 2 pauses for human review after each functional role. Model 3, which CARCS was built to serve, runs with no agent and no automated infrastructure. The practitioner orchestrates every step manually, and what that leaves is a structure gap. CARCS is the structured synthesis layer that closes it.

This paper and the prompt suite in the appendix were designed deliberately for use beyond the HAIA ecosystem. The ten-section schema, the evidentiary grading system, the dissent mandate, and the accountability requirements all function independently of whether the practitioner knows what RECCLIN or CAIPR mean. Practitioners operating within the HAIA ecosystem will find CARCS significantly more powerful because the infrastructure already exists to populate it. They are not adopting a documentation add-on. They are activating the documentation layer that their existing governance practice was already generating the evidence to support.

Appendix A: The CARCS Prompt Suite

The following prompt suite can be run at the end of any AI session or project. It is designed to work on any AI platform without requiring prior knowledge of the HAIA ecosystem. Run it in the platform that has the most complete view of the work. All three prompts are required for a complete record.

Session Tips

Private Evidence Pulls: Upload session exports to the project context before running prompts. Records built from private file stores are more defensible than those relying on public platform memory alone.

Platform Memory Note: Identify your platform’s memory type before running: Full Evidence, Session Limited, Memory Partial, or Practitioner Reconstructed.

Live Session Tip: Tag governance decisions inline with [T0-OVERRIDE] during the session. This creates a searchable anchor for Section 5.

Voice Gate Tip: Write Section 6 observations in third person during the session rather than relying on AI voice conversion afterward.

Pre-Flight Checklist

Complete before running any prompt. If your answers to questions 1 and 2 are No, the resulting record will be Attestation Grade with significant gaps in Sections 5 and 10. Do not classify it as Compliance-Oriented.

  1. Have you tagged governance decisions during the session using [T0-OVERRIDE] or explicit decision language such as “approved,” “rejected,” or “going with X”?
  2. Have you exported or can you access session evidence from all platforms used in this project?
  3. Human arbiter name for Section 6 attribution: ______________________________
    (Write your full name here before running Prompt One. The synthesis platform will use this name in every Section 6 observation. If left blank, Section 6 will be flagged [PRACTITIONER NAME REQUIRED] throughout.)
  4. What is the intended use of this record? (Compliance / Research / Case Study / Continuity)
  5. If the synthesis platform declares Full Evidence, verify by asking it to name at least three prior sessions by date or topic. If it cannot, downgrade the declaration to Memory Partial before proceeding.

Prompt One: Session Identity, Working Context, and Methodology

CARCS PROMPT ONE
GLOBAL RULE: This record is written in third person throughout. No first-person singular anywhere in the document.

Your task is to draft the first three sections of a CARCS (Compliance Accountability Record and Case Study) record from the evidence actually available in your active context, accessible memory, or practitioner-supplied materials. Do not claim access to evidence you cannot see. If evidence is missing, partial, or ambiguous, state that explicitly and flag the field for practitioner completion.

Before generating, state your memory type as the first line of your output:
– Full Evidence: Complete access to all project sessions
– Session Limited: Access only to this chat window
– Memory Partial: Cross-session memory without full conversation histories
– Practitioner Reconstructed: Evidence was manually pasted into this session
This declaration sets the evidence status class for the record and tells the reader how much of the underlying work the synthesis platform can actually see.

EVIDENCE CEILING RULE: Summarize only what is visible in the evidence available to you. Do not infer unseen sessions, missing decisions, hidden files, or unstated governance actions. If the practitioner appears to have done something that is not explicitly documented, flag it as [PRACTITIONER REVIEW REQUIRED] rather than presenting it as confirmed fact.

—

SECTION 1: SESSION IDENTITY
– Project or case study identifier
– CARCS version number (v1.0 for a new record)
– Date of this session
– Session type: New project, continuation, or project close
– Evidence status class
– If continuation: which prior CARCS or document was loaded at session open

—

SECTION 2: WORKING CONTEXT AT SESSION OPEN
– What instructions, working agreements, or operating rules were active at the start of this session
– Which platforms or tools had those instructions loaded and whether that was confirmed before work began
– If continuation: what was confirmed as carried over from the prior session
– If no context confirmation was performed, state that explicitly
Absence of confirmation is an observable session condition, not a disqualifying failure.

—

SECTION 3: METHODOLOGY
– Prompts dispatched (full text where available, accurate summary where length requires)
– Platforms or tools used and how the work was organized across them
– What function each platform or tool served for each task, where applicable
– Prompt type: structured with fixed parameters (produces source-level diversity) or open with interpretation left to the platform (produces perspective-level diversity)
– Whether any ambiguity in the prompt or task was resolved before work began

—

Stop after Section 3. Wait for review and confirmation before proceeding to Prompt Two.
— End of prompt. Review output before proceeding. —

Prompt Two: Platform Outputs, Human Arbiter Record, and Governance Observations

CARCS PROMPT TWO
GLOBAL RULE: This record is written in third person throughout. No first-person singular anywhere in the document.

CONTEXT GUARD: Before generating any section, confirm you have access to Sections 1 through 3 from Prompt One. If those sections are not visible in your active context, state [CONTEXT LOST: RUN PROMPT ONE FIRST] and halt. Do not proceed without them.

Continue the CARCS record. Generate Sections 4 through 6, then stop and wait for review before proceeding to Prompt Three.

—

SECTION 4: PLATFORM OUTPUTS
Document what the platforms produced:
– Role self-assignments where applicable
– Convergence signals: where platforms agreed
– Divergence and dissent: where platforms disagreed, with explicit platform attribution
– Notable platform-specific behaviors

DISSENT MANDATE: Do not smooth divergent outputs into a unified summary if organic consensus did not occur. Document all outputs that contradict the majority view. If there is no clear majority, document all outputs verbatim and flag for human review. If all platforms converged unanimously, note that uniformity explicitly as a signal requiring the human arbiter’s assessment and state the number of platforms that contributed to the convergence assessment. For Session Limited records where only one platform contributed, dissent analysis is structurally unavailable. State this explicitly and recommend a multi-platform follow-up dispatch before treating the output as a compliance-grade artifact.

—

SECTION 5: HUMAN ARBITER RECORD
Document only explicit governance decisions made by the human arbiter that are visible in the available evidence. Search the session context for any text tagged [T0-OVERRIDE], as these are strong decision anchors. If [T0-OVERRIDE] tags are not present, scan for explicit decision language such as “approved,” “let’s use X instead of Y,” “rejected,” “going with this,” or similar direct statements of choice or rejection, and note them as unclassified decisions for practitioner review. Do not infer a decision merely from conversation flow or from the final shape of the output. If a likely decision appears to have occurred but is not explicitly documented, list it under [PRACTITIONER REVIEW REQUIRED] rather than classifying it as a confirmed decision.

Classify each decision:
– Corrective Override: The platform made an error and the human arbiter caught and corrected it
– Creative Supersession: The human arbiter independently produced something better than the platform’s best output
– Checkpoint Confirmation: The human arbiter explicitly reviewed and approved the platform output. Silence, continuation, or lack of objection does not count as confirmation.
– Deferred Decision: The human arbiter identified an item but chose not to resolve it in this session (moves to Section 9)

For each decision record:
– Classification type
– What was decided
– Rationale provided by the human arbiter
– Actions considered and not taken, with reasoning
– Anchor tag: [EXPLICIT] if the decision is directly stated in the session text, or [INFERRED] if it was reconstructed from context

If human arbiter decisions are not visible in the session context, flag this section for practitioner completion and add: ZERO DECISION SIGNALS DETECTED. No tagged or explicit decision signals were found within the synthesis platform’s accessible context window. This flag indicates a detection limit and does not confirm absence of human governance. Practitioner must review and complete Section 5 before finalization. This section cannot be left empty in a finalized CARCS record.

—

SECTION 6: GOVERNANCE OBSERVATIONS
Write this section in third person. The human arbiter must be named explicitly within the prose of every observation involving a human decision. Named attribution is mandatory. Do not use first-person singular. Do not substitute a role description alone.
Correct: “Basil C. Puglisi directed the synthesis platform to re-examine the dissenting output before finalizing the convergence finding.”
Incorrect: “I directed the platform to re-examine the dissenting output.”
Incorrect: “The human arbiter directed the platform.” (name required)
If the practitioner’s name is not available, flag the name field as [PRACTITIONER NAME REQUIRED] before finalization.

Document:
– What this session showed about the methodology in practice
– Failure modes caught or avoided
– Protocol behaviors under real operating conditions
– Any instance where the governance structure changed an outcome

If the practitioner provided direct commentary during the session, preserve the substance exactly and rewrite only what is necessary to convert the statement into third-person named attribution. Do not add interpretation, compression, or paraphrase beyond the minimum voice shift required.

—

Stop after Section 6. Wait for review before proceeding to Prompt Three.
— End of prompt. Review output before proceeding. —

Prompt Three: Methodology Implications, Continuity, Open Items, and Raw Evidence Index

CARCS PROMPT THREE
GLOBAL RULE: This record is written in third person throughout. No first-person singular anywhere in the document.

CONTEXT GUARD: Before generating any section, confirm you have access to Sections 1 through 6 from Prompts One and Two. If those sections are not visible in your active context, state [CONTEXT LOST: RUN PROMPTS ONE AND TWO FIRST] and halt. Do not proceed without them.

Complete the CARCS record. Generate Sections 7 through 10 and the completion checklist. This is the final prompt. A CARCS record is not complete without all three prompts executed and reviewed.

—

SECTION 7: METHODOLOGY IMPLICATIONS
– Which methodologies, frameworks, or working practices are evidenced by this session
– Which need updating or refinement based on what this session produced
– Any finding that changes how a working protocol or procedure should be documented
– Any new concept, pattern, or principle that emerged and has not yet been formally named

—

SECTION 8: CONTINUITY RECORD
Write this section for the next session, not as a summary of this one. Brief the next practitioner as if they have no prior context:
– Which instructions, working agreements, or operating rules should be confirmed active before the first task
– Which platforms have governance memory loaded as of this session’s close
– Which decisions are open and awaiting disposition
– Project state: what is complete, what is in progress, what has not started

—

SECTION 9: OPEN ITEMS
List every unresolved question, deferred decision, and flagged item. For each:
– Description
– Priority: Critical, High, Standard, or Low
– Suggested resolution path
Nothing is omitted because it seems minor.

—

SECTION 10: RAW EVIDENCE INDEX
PROVENANCE RULE: Do not invent filenames, URLs, export records, hashes, or access paths. If any provenance field is unavailable in the evidence visible to you, mark it as [MISSING: PRACTITIONER COMPLETION REQUIRED]. A missing provenance field limits the evidentiary strength of the record and must remain visible.

For each platform session contributing evidence:
– Platform name
– Session date
– Local export filename or conversation URL (where available)
– Evidence access type: directly accessible to synthesis platform, or manually pasted by practitioner
– SHA-256 hash of exported file (where tooling permits)

SHA-256 hash verification is the target integrity standard. If hashing tooling was not available, state that explicitly. Records without hash verification are classified as Attestation Grade, governed by Practitioner Disclosure rather than cryptographic chain of custody. Both grades may be used, but the record must declare which grade applies so readers do not mistake attestation for cryptographic verification. Hash Verified and Attestation Grade are not equivalent evidentiary states.

For Practitioner Reconstructed records, include a Practitioner Disclosure:
– Which platform outputs were included
– Which were omitted and why
– What the practitioner attests about the completeness of the evidence base

—

CARCS COMPLETION STATUS
[ ] All ten sections generated across three prompts
[ ] Memory type and evidence status class declared at document opening
[ ] Section 4 dissent documented where multi-platform output existed, or section explicitly states dissent analysis unavailable due to single-platform run
[ ] Section 5 contains at least one classified decision or is flagged for practitioner completion
[ ] Section 6 written in third person with human arbiter named explicitly in prose
[ ] Section 8 written for the next session, not as a backward summary
[ ] Section 9 populated or explicitly states “No open items identified”
[ ] Section 10 integrity grade declared (Hash Verified or Attestation Grade)

—

PRACTITIONER INSTRUCTION: Review the complete document before finalizing. Section 6 must carry your name as the human arbiter within the prose. Section 10 must declare its integrity grade and identify any missing fields that prevent stronger evidentiary treatment. Before approving, select the narrowest defensible Evidentiary Use label and add it to the Document Control block. Use the decision table below. The model produces the inputs. The practitioner makes the classification. When in doubt, select the narrower label. After selecting the label, add a Limitations paragraph of one to three sentences enumerating the specific evidentiary gaps in this record.

Save as: HAIA_CARCS_[ProjectIdentifier]_v1_0.md
Never overwrite a prior version. Each revision produces a new file.

LEGAL NOTICE: This record may be discoverable in legal or regulatory proceedings. Practitioners in regulated or legally sensitive contexts should consult legal counsel before treating any CARCS record as a privileged compliance artifact.

VERSIONING: Use a .1 increment for governance or structural changes and a .01 increment for prose, grammar, or formatting corrections. If an error is discovered after approval, produce a new versioned file labeled ERRATA in the Document Control block.

This document is #AIassisted.
— End of prompt. Review output before proceeding. —
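The Section 10 raw evidence index described in the prompt above can be sketched as a simple record type. This is a minimal illustrative sketch, not part of the protocol; the class name, field names, and `render` helper are this sketch's own inventions, and the protocol's rule that missing provenance fields stay visible is modeled with the `[MISSING: ...]` sentinel.

```python
from dataclasses import dataclass
from typing import Optional

# Sentinel from the PROVENANCE RULE: missing fields must remain visible.
MISSING = "[MISSING: PRACTITIONER COMPLETION REQUIRED]"

@dataclass
class EvidenceIndexEntry:
    """One Section 10 entry per platform session (illustrative field names)."""
    platform_name: str
    session_date: str
    export_filename_or_url: Optional[str] = None  # local export file or conversation URL
    access_type: str = "manually pasted by practitioner"
    sha256_hash: Optional[str] = None             # where tooling permits

    def render(self) -> list:
        """Render the entry, keeping any missing provenance field visible."""
        return [
            f"Platform name: {self.platform_name}",
            f"Session date: {self.session_date}",
            f"Export filename or URL: {self.export_filename_or_url or MISSING}",
            f"Evidence access type: {self.access_type}",
            f"SHA-256 hash: {self.sha256_hash or MISSING}",
        ]
```

An entry left without a hash would render an Attestation Grade line rather than silently omitting the field, which is the behavior the provenance rule requires.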

Evidentiary Use Decision Table

After generating the complete record, select the narrowest defensible label and add it to the Document Control block. The model produces the inputs. The practitioner makes the classification. When in doubt, select the narrower label.

| Evidence Status Class | Sections 5 + 10 Complete | Sections 5 + 10 Partial or Flagged |
| --- | --- | --- |
| Full Evidence | Compliance-Oriented Record with Declared Limitations | Compliance-Oriented Draft Pending Practitioner Completion |
| Session Limited | Research / Case Study Draft | Continuity and Internal Review Record |
| Memory Partial | Research / Case Study Draft | Continuity and Internal Review Record |
| Practitioner Reconstructed | Research / Case Study Draft | Session Save Point Only |
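Read as code, the decision table amounts to a two-key lookup. The sketch below is illustrative only (the function and dictionary names are this sketch's own, not part of CARCS), and it models the protocol's rule that the practitioner, not the model, performs the classification.

```python
# Illustrative lookup over the Evidentiary Use Decision Table.
# Keys: (evidence status class, whether Sections 5 and 10 are complete).
LABELS = {
    ("Full Evidence", True):  "Compliance-Oriented Record with Declared Limitations",
    ("Full Evidence", False): "Compliance-Oriented Draft Pending Practitioner Completion",
    ("Session Limited", True):  "Research / Case Study Draft",
    ("Session Limited", False): "Continuity and Internal Review Record",
    ("Memory Partial", True):  "Research / Case Study Draft",
    ("Memory Partial", False): "Continuity and Internal Review Record",
    ("Practitioner Reconstructed", True):  "Research / Case Study Draft",
    ("Practitioner Reconstructed", False): "Session Save Point Only",
}

def evidentiary_use_label(status_class: str, sections_5_and_10_complete: bool) -> str:
    """Return the table's label; the human practitioner makes the final call."""
    return LABELS[(status_class, sections_5_and_10_complete)]
```

Note that any doubt about Section 5 or 10 completeness moves the record rightward in the table, which is how the "when in doubt, select the narrower label" rule operates in practice.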

Appendix B: Sources

Katz v. United States, 389 U.S. 347 (1967).

United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) (written memorandum opinion).

Akin Gump Strauss Hauer & Feld LLP. (2026, February). SDNY rules communications with a public generative AI platform are not protected by attorney-client privilege or work product doctrine. akingump.com

Debevoise & Plimpton LLP. (2026, February 11). SDNY rules AI-generated documents are not protected by privilege. debevoisedatablog.com

Harvard Law Review. (2026, March). United States v. Heppner. Harvard Law Review Blog. harvardlawreview.org

New York State Bar Association. (2026, March 10). Loose AI prompts sink ships: How Heppner shook the legal community. nysba.org

O’Melveny & Myers LLP. (2026, February 24). S.D.N.Y. first-of-its-kind ruling: AI-generated documents are not privileged. omm.com

Paul, Weiss, Rifkind, Wharton & Garrison LLP. (2026, February 20). SDNY court considers whether AI-generated documents are subject to privilege protections. paulweiss.com

Proskauer Rose LLP. (2026, February 18). SDNY addresses privilege and work product implications of using unsecured public AI tools. proskauer.com

European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act). eur-lex.europa.eu

IAPP. (2026, February 3). European Commission misses deadline for AI Act guidance on high-risk systems. iapp.org

Puglisi, B. C. (2012). Digital Factics (1st ed.). Digital Ethos Press.

Puglisi, B. C. (2025). Digital Factics (2nd ed.). Digital Ethos Press.

Puglisi, B. C. (2025). Governing AI: When capability exceeds control (ISBN 9798349677687). Digital Ethos Press.

Puglisi, B. C. (2026a). Checkpoint-based governance: A constitution for human-AI collaboration (v5.0). basilpuglisi.com.

Puglisi, B. C. (2026b). GOPEL: Governance orchestrator policy enforcement layer (v1.5). basilpuglisi.com.

Puglisi, B. C. (2026c). HAIA ecosystem [Open-source framework, Creative Commons]. github.com/basilpuglisi/HAIA

Puglisi, B. C. (2026d). HAIA-RECCLIN: Human-AI governance methodology (3rd ed.). basilpuglisi.com.

Puglisi, B. C. (2026e). The human enhancement quotient: Measuring cognitive amplification through AI collaboration. Social Science Research Network. ssrn.com/abstract=6583419

Puglisi, B. C. (2026f). AI provider plurality and the Verified AI Inference Standards Act (VAISA). Social Science Research Network. ssrn.com/abstract=6195238

All URLs confirmed active as of April 2026. Readers seeking independent verification of United States v. Heppner may search the PACER federal court records system using docket No. 25-cr-00503-JSR.

#AIassisted

FAQ

Q1: What is CARCS? A1: CARCS stands for Compliance Accountability Record and Case Study. It is a structured documentation protocol that transforms raw AI session evidence into a portable, governed record suitable for internal review, external audit, academic citation, or publication as a case study. It is human-triggered, requires named human sign-off before finalization, and works on any AI platform without requiring prior knowledge of the HAIA ecosystem.

Q2: Why does AI work need a documentation protocol beyond conversation history? A2: Conversation histories answer audit questions only after someone reads through everything and reconstructs decision logic from fragments. An audit asks who decided, on what evidence, at which point, and why. Raw platform histories are organized around conversation flow, not audit questions. CARCS closes that gap by producing a ten-section record that answers audit questions directly.

Q3: Do you need the HAIA ecosystem to use CARCS? A3: No. The prompt suite in the appendix was deliberately designed without HAIA-specific terminology so it functions across any AI workflow. A compliance officer at a hospital, a lawyer reviewing contracts, or a marketing team running AI-assisted campaigns can run CARCS without knowing what RECCLIN or CAIPR mean. HAIA practitioners benefit more because the infrastructure already exists to populate the record, but the protocol is designed for external use.

Q4: What is the difference between Attestation Grade and Hash Verified records? A4: Attestation Grade records rely on practitioner disclosure about what was included and omitted. Hash Verified records include a SHA-256 cryptographic fingerprint of the exported session file. A locally computed hash takes under a minute with Python’s hashlib library; it does not constitute an externally verified chain of custody, but any subsequent modification to the file produces a detectable mismatch. Hash Verified and Attestation Grade are not equivalent evidentiary states, and the record must declare which applies.
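A minimal sketch of that local hashing step, using Python's standard hashlib module (the function name is illustrative; reading in chunks keeps memory flat for large exports):

```python
import hashlib

def sha256_of_export(path: str) -> str:
    """Compute the SHA-256 fingerprint of an exported session file, chunk by chunk."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):  # 64 KiB chunks
            digest.update(chunk)
    return digest.hexdigest()
```

The resulting 64-character hex string goes into the Section 10 entry; re-running the function on an unmodified export reproduces it exactly, while any edit to the file yields a different digest.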

Q5: What does the Heppner ruling mean for CARCS records? A5: In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), Judge Jed Rakoff declined to extend attorney-client privilege to documents generated on Claude’s consumer platform, placing weight on Anthropic’s privacy policy terms and the absence of counsel direction. Commentary from Akin Gump and the Harvard Law Review characterizes the ruling as fact-specific rather than categorical. Practitioners in regulated contexts should consult legal counsel before treating any CARCS record as a privileged artifact. The author also notes that a governed record is double-edged: preserved dissent and classified decisions are discoverable and can be used against the practitioner.

Q6: What is the working paper status of CARCS? A6: CARCS v1.33 is a working paper released for feedback and collaboration. The methodology is complete and the prompt suite is ready to use today. Feedback from governance practitioners, compliance officers, legal professionals, and researchers is actively invited to inform future versions.
