Where This Fits
GOPEL (Governance Orchestrator Policy Enforcement Layer) sits in the middle of a four-layer adoption ladder built over three years of operational practice: Factics provides the foundational methodology connecting facts to tactics and measurable outcomes. HAIA-RECCLIN provides the seven-role framework for human-AI collaboration with distributed authority across multiple AI platforms. HAIA-CAIPR provides the cross-platform review protocol that catches what no single platform catches alone. GOPEL provides the non-cognitive enforcement layer that makes governance deterministic rather than aspirational.
Above GOPEL sits Checkpoint-Based Governance (CBG v4.7), the constitutional authority layer that governs the human arbiter’s authority and accountability. CBG answers who decides, by what authority, at what checkpoint. GOPEL answers how those decisions are enforced, logged, and made tamper-evident.
The two extensions published today strengthen GOPEL at the cryptographic and privacy layers without changing anything above or below. CBG still governs, RECCLIN still assigns roles, CAIPR still reviews, and Factics still pairs facts to tactics. What changes is that GOPEL now enforces with stronger cryptographic foundations and a governed approach to the most vulnerable moment in any AI workflow: the moment data leaves the orchestrator’s custody and enters an external platform.
The Two Gaps That Remained
GOPEL’s seven deterministic operations enforce governance before execution, not after. The Pause operation stops at preconfigured checkpoint gates before state mutation proceeds. The Hash operation computes SHA-256 cryptographic binding at each operation. The policy engine defaults to deny with single-veto blocking. The architecture has been adversarially reviewed by seven independent AI platforms, with a proof of concept running 183 unit and integration tests across 14 Python modules verified for non-cognitive compliance by a static analyzer.
Two gaps remained in the published specification. The cryptographic foundation uses digital signature algorithms that will become forgeable when quantum computing matures. And GOPEL protects data before dispatch and after collection but had no mechanism governing what happens while an AI platform actively processes a governed prompt.
Both gaps are now addressed at the specification level. The quantum signature gap is closed through cryptographic transition to NIST post-quantum standards, while the privacy-during-computation gap is governed through structural enforcement, honest documentation, and mandatory human accountability at every decision point where verification evidence isn’t available. Both extensions are published, adversarially reviewed, and committed to the public repository with immutable timestamps.
The distinction matters. Closing a gap means the vulnerability is eliminated by the specification. Governing a gap means the vulnerability is classified, documented, made visible, and placed under human arbitration. GOPEL does the first where the technology permits and the second where it does not, and it tells the truth about which is which.
Gap One: The Quantum Clock
GOPEL’s audit trail uses SHA-256 hash chaining for tamper evidence and Ed25519 or ECDSA P-256 for digital signatures binding human identity to governance decisions. The hash chain is quantum-resistant. SHA-256 under Grover’s algorithm retains approximately 128-bit effective security against preimage attacks, which remains computationally prohibitive for any currently projected quantum capability.
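The tamper-evidence property of hash chaining can be shown in a minimal sketch. The record schema here is illustrative, not GOPEL's actual format; the point is that mutating any historical payload invalidates every downstream hash:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder chain origin

def chain_record(prev_hash: str, payload: dict) -> dict:
    """Bind a new audit record to its predecessor via SHA-256."""
    body = json.dumps(payload, sort_keys=True)
    record_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev_hash": prev_hash, "payload": payload, "hash": record_hash}

def verify_chain(records: list) -> bool:
    """Recompute every link; any mutation breaks the chain from that point on."""
    for i, rec in enumerate(records):
        expected_prev = GENESIS if i == 0 else records[i - 1]["hash"]
        if rec["prev_hash"] != expected_prev:
            return False
        body = json.dumps(rec["payload"], sort_keys=True)
        if hashlib.sha256((rec["prev_hash"] + body).encode()).hexdigest() != rec["hash"]:
            return False
    return True
```

This is exactly why the chain itself survives quantum attack: forging it requires a SHA-256 preimage, not a signature. The signatures layered on top of these records are the vulnerable component.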
The digital signatures are not. Both Ed25519 and ECDSA rely on elliptic curve mathematics that Shor’s algorithm solves efficiently on a cryptographically relevant quantum computer. An adversary collecting signed audit records today could produce fraudulent signatures after quantum capability matures. NIST distinguishes this specific risk from the classic “Harvest Now, Decrypt Later” (HNDL) confidentiality threat (NIST IR 8547). HNDL remains a real threat for encrypted confidential data, but for governance audit trails, the primary risk is not decryption. It is future forgery of authentication credentials on historical records: archival signature forgery.
The signature integrity problem has a compound variant. An adversary who compromises a historical classical signing key could rewrite the audit chain from a point of alteration forward, regenerating all subsequent hashes and producing a valid but fraudulent alternative history. The internal hash chain alone cannot distinguish the authentic chain from the rewritten one without an external reference point.
What the Post-Quantum Amendment Does
The GOPEL Post-Quantum Cryptographic Agility Amendment (v1.2) adds a three-tier signature classification to the specification.
Tier A (Classical, Legacy and Short-Retention) keeps the current Ed25519 and ECDSA algorithms as the minimum acceptable configuration. Tier A is strictly for deployments where the quantum threat timeline exceeds the retention period of the signed records. It isn’t recommended for new deployments beginning in 2026 and should be understood as a transitional classification, not a long-term posture.
Tier B (Hybrid, Recommended Default) produces a non-separable composite signature containing both a classical component and a post-quantum component (ML-DSA, NIST FIPS 204, finalized August 2024), bound through explicit domain separation per the IETF composite signature draft. Both components must verify against the composite algorithm identifier. The non-separable construction prevents stripping attacks where an adversary substitutes one component after the other becomes forgeable. Tier B is the recommended default for all new deployments, and it functions as a migration hedge: the classical component provides continuity with existing verification tooling while the post-quantum component provides forward security.
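The non-separable construction can be sketched as follows. HMAC stands in for both the Ed25519 and ML-DSA primitives purely to keep the sketch self-contained and runnable; a real implementation would use actual signature algorithms and the exact domain-separation encoding from the IETF composite draft. The names and the composite identifier are illustrative:

```python
import hmac
import hashlib

# Domain separator ensures the composite context differs from either
# component used standalone (the stripping-attack resistance property).
DOMAIN = b"GOPEL-COMPOSITE-SIG-v1"

def _sign(key: bytes, label: bytes, message: bytes) -> bytes:
    # Placeholder primitive standing in for Ed25519 / ML-DSA signing.
    return hmac.new(key, DOMAIN + label + message, hashlib.sha256).digest()

def composite_sign(classical_key: bytes, pq_key: bytes, message: bytes) -> dict:
    return {
        "alg": "composite-ed25519-mldsa65",  # single composite identifier
        "classical": _sign(classical_key, b"classical", message),
        "pq": _sign(pq_key, b"pq", message),
    }

def composite_verify(classical_key: bytes, pq_key: bytes,
                     message: bytes, sig: dict) -> bool:
    # Both components must verify against the composite identifier;
    # a signature with either component stripped or altered fails closed.
    return (
        sig.get("alg") == "composite-ed25519-mldsa65"
        and hmac.compare_digest(sig.get("classical", b""),
                                _sign(classical_key, b"classical", message))
        and hmac.compare_digest(sig.get("pq", b""),
                                _sign(pq_key, b"pq", message))
    )
```

The design point is the single `alg` identifier: verification is all-or-nothing against the composite, so an adversary cannot present the classical component alone after quantum capability makes it forgeable.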
Tier C (Post-Quantum Primary, Preferred Long-Term Target) uses ML-DSA as the sole signing algorithm, with SLH-DSA (NIST FIPS 205) available as a hash-based resilience fallback. SLH-DSA carries substantially larger signatures (up to 49KB depending on parameter set), making it a resilience option with real storage cost, not a neutral substitute. Tier C is the preferred target when NIST moves classical signatures toward disallowance after 2035.
The amendment specifies ML-DSA-65 (NIST Security Category 3) in deterministic signing mode as the recommended parameter set, with documented rationale comparing ML-DSA-44 (lighter, shorter retention) and ML-DSA-87 (maximum security, critical infrastructure). It requires dual-key management with separate HSM partitions so that compromise of one key does not grant access to the other. It prohibits reuse of component keys in standalone signing contexts outside the governance workflow.
For the certificate chain, the amendment requires that CA certificates be signed at the same tier or higher as the audit records they certify. A hybrid audit signature verified against a classical-only CA certificate creates a weakest-link vulnerability that the amendment closes.
Securing Historical Records
Historical records signed under classical algorithms carry a residual archival forgery risk. The amendment names this explicitly as a bounded residual risk, not as a solved problem. It specifies three mitigation mechanisms.
External hash chain anchoring periodically publishes the current chain tip hash to a quantum-resistant external source of truth: a Trusted Timestamping Authority signing with ML-DSA, a post-quantum public ledger, or a qualified public publication medium meeting minimum admissibility criteria (append-only retention, independent witnessability, cryptographic or institutional timestamping, and externally retrievable proof). Each anchoring event creates an immutable checkpoint that bounds the rewrite exposure window to the interval between anchors. For high-risk deployments under EU AI Act classification, the maximum interval is seven days.
A mandatory post-quantum notarization checkpoint at migration commits the entire pre-migration chain state under a quantum-resistant signature, creating a cryptographic boundary between the classical and post-quantum eras of the audit trail without re-signing every historical record.
Verification tooling must display a visible indicator when processing historical Tier A records, flagging them as classical-only and quantum-vulnerable.
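The bounded rewrite window from periodic anchoring can be sketched as a simple interval check. The seven-day ceiling is from the amendment's high-risk classification; the event schema is hypothetical, since the real event would itself be signed by a post-quantum TSA:

```python
from datetime import datetime, timedelta, timezone

# Maximum anchoring interval for high-risk deployments per the amendment.
MAX_ANCHOR_INTERVAL = timedelta(days=7)

def anchor_due(last_anchor: datetime, now: datetime) -> bool:
    """Fail closed: anchoring is due once the interval is reached or exceeded."""
    return now - last_anchor >= MAX_ANCHOR_INTERVAL

def make_anchor_event(chain_tip_hash: str, now: datetime) -> dict:
    # Publishing the tip hash externally bounds any rewrite attack to the
    # records created since the last anchor. Schema is illustrative.
    return {"tip": chain_tip_hash, "anchored_at": now.isoformat()}
```

An adversary who compromises a signing key can still regenerate hashes forward from the point of alteration, but the fraudulent chain tip will not match any externally published anchor, so the exposure is limited to one anchoring interval.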
Migration Triggers
Migration triggers are keyed to specific NIST milestones (IR 8547, SP 800-131A Rev. 3 draft), not vague future events.
Trigger 1 (2026): All new GOPEL deployments should operate at Tier B minimum. This is a GOPEL-specific policy recommendation.
Trigger 2 (2030): When NIST finalizes deprecation of 112-bit classical digital signatures, all existing Tier A deployments must present a documented migration plan within 90 days. This trigger functions as a governance planning checkpoint for all Tier A deployments, even those using 128-bit algorithms (Ed25519, ECDSA P-256), ensuring migration planning begins well before the 2035 hard deadline.
Trigger 3 (2035): When NIST moves classical digital signatures toward disallowance, all active deployments must operate at Tier B minimum (Tier C preferred). Tier A is no longer acceptable for new records.
These triggers are based on draft NIST guidance as of March 2026 and must be rechecked against finalized publications. If NIST milestones shift, the triggers shift with them. The specification ties to published milestones, not to fixed calendar dates divorced from the standards body.
What the Review Found
Independent AI platforms adversarially reviewed the amendment across two CAIPR rounds, totaling ten reviews from five platforms (Gemini, Kimi, DeepSeek, Grok, ChatGPT), and every platform confirmed the architecture is sound. DeepSeek found that the original draft did not address storage impact from larger post-quantum signatures. Gemini flagged that historical records need external anchoring to prevent full-chain rewrite attacks. ChatGPT provided the deepest technical contribution: the HNDL terminology correction, the non-separable composite profile requirement, the ML-DSA parameter justification, the signing mode specification, and the NIST milestone-based triggers replacing a vague “when NIST issues deprecation guidance” placeholder.
Every dissent was either resolved in the specification text or explicitly named as bounded residual risk with documented mitigation, and nothing was smoothed over.
The two gaps meet at a clean boundary: the post-quantum amendment secures the permanence of the evidence chain, while the second gap concerns the data itself at its moment of maximum exposure.
Gap Two: The Invisible Moment
GOPEL’s audit trail secures governed data at two points. Before dispatch, data sits in hash-chained, digitally signed records. After collection, responses enter the same chain. Between those two points, custody transfers to an external AI platform. The platform receives the prompt in cleartext, processes it through its inference stack (CPU, GPU VRAM, attention key-value cache, speculative decoding buffers, internal logging), and returns a response. GOPEL has zero visibility into what happens during that interval.
GDPR Article 25 requires privacy by design during processing, not just at rest and in transit. The EU AI Act Article 10 addresses data governance for high-risk systems, primarily focused on training, validation, and testing data rather than runtime prompt confidentiality during inference, but data governance completeness requires accounting for the inference pathway. DORA strengthens the case for recording what evidence exists about computation conditions through its audit trail and operational resilience requirements.
The legal exposure is real, and GDPR Article 25 is the strongest anchor. The gap matters most when governed workflows contain personal data, regulated financial data, health information, or classified material. Sector-specific regulatory frameworks (HIPAA for healthcare, SOX and GLBA for financial services, sector-specific AI regulations as they emerge) will map to this gap through their own privacy-during-processing requirements. The specific regulatory mapping is deployment-specific and belongs in the deploying organization’s data governance policy, not in the GOPEL specification.
The Honest Position
GOPEL cannot fully close the privacy-during-computation gap at the orchestration layer for opaque third-party AI platform APIs.
That position is the foundation of the Confidential Processing Extension, not a caveat at the end. When GOPEL dispatches a prompt to an external platform that provides no remote attestation or equivalent verifiable evidence, the privacy status of that computation is unknown. GOPEL can govern what happens before and after, but it can’t verify what happens during. No orchestration mechanism can substitute for platform-level trustworthiness.
A specification that claims to close this gap when it cannot creates compliance misrepresentation risk worse than the gap itself. The CPE builds every governance control on the honest foundation that the gap is managed through structural enforcement, not eliminated through technical proof.
What the Confidential Processing Extension Does
The GOPEL Confidential Processing Extension (CPE v1.1) introduces a four-profile classification that ensures every governed dispatch carries a deterministic privacy status, with no dispatch passing through ungoverned, no scenario falling outside classification, and defaults that fail closed.
Profile 0 (Opaque External Processing) applies when the platform offers no attestation evidence. GOPEL logs the absence of privacy evidence as an Unverified Processing Record. Sensitive data to opaque endpoints triggers a mandatory Pause gate with a provisional profile assignment. The human arbiter receives the checkpoint package with an explicit notation that no privacy-during-computation evidence is available, then confirms, upgrades, or rejects the dispatch before any data leaves GOPEL’s custody. The report status reads “UNVERIFIED DURING COMPUTATION.” The gap isn’t closed, but it is governed, documented, and accountable.
Profile 1 (Attested Confidential Inference) applies when the platform runs inference inside a hardware-enforced Trusted Execution Environment (TEE) and exposes remote attestation. GOPEL performs four deterministic binary checks before dispatching: signature verification against the hardware vendor’s root certificate, enclave measurement comparison against a pre-approved allowlist, freshness nonce verification to prevent replay attacks, and TCB version check against minimum requirements. All checks are binary, and none evaluate content.
Profile 1 reports two evidence grades. “VERIFIED ATTESTED ENVIRONMENT” when attestation passes but no signed inference receipt exists; the compute environment is verified, but the binding between the attestation and the specific transaction rests on temporal correlation. “VERIFIED CONFIDENTIAL PROCESSING” when attestation passes and a signed receipt binds the specific transaction to the attested environment through matching input hashes. The stronger grade confirms both the environment and the specific transaction, while the weaker grade confirms the environment only; neither overclaims.
In the strongest configuration, GOPEL encrypts the prompt to the enclave’s public key, and the decryption key is released only after attestation succeeds against a key management service (AWS KMS with Nitro Enclave conditions, Azure Attestation with policy-driven key release). The platform never receives plaintext outside the enclave boundary.
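The four binary checks in Profile 1 can be sketched as a single fail-closed predicate. Field names and the trust-store shape are illustrative assumptions; in practice the certificate-chain verification would be performed by the hardware vendor's attestation SDK and is represented here as a pre-computed boolean:

```python
def verify_attestation(evidence: dict, trust_store: dict) -> bool:
    """All four checks are binary; none evaluate prompt content. Fail closed:
    any missing field causes the corresponding check to fail."""
    checks = (
        # 1. Attestation document signature chains to the vendor root.
        evidence.get("signature_valid_against_vendor_root") is True,
        # 2. Enclave measurement appears on the pre-approved allowlist.
        evidence.get("enclave_measurement") in trust_store["measurement_allowlist"],
        # 3. Freshness nonce matches the challenge we issued (anti-replay).
        evidence.get("nonce") == trust_store["expected_nonce"],
        # 4. Trusted Computing Base version meets the minimum requirement.
        evidence.get("tcb_version", -1) >= trust_store["min_tcb_version"],
    )
    return all(checks)
```

Because every check compares evidence against pre-configured expectations, the function is deterministic and non-cognitive: identical inputs always produce the identical verdict.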
Profile 2 (Minimized External Processing) applies deterministic tokenization before dispatching to an opaque endpoint. Pre-compiled regex patterns match structured PII formats (SSN, email, phone, account number, date, IP address) and replace them with consistent placeholder tokens. Named field replacements handle structured prompts. The token map never leaves GOPEL’s custody, so the platform processes a minimized representation. The report status reads “DISCLOSURE MINIMIZED, CONFIDENTIAL PROCESSING NOT VERIFIED.”
Profile 2 provides partial minimization of structured sensitive data, not comprehensive anonymization. Sensitive information that does not match a pattern rule passes through in cleartext. Contextual inference from surrounding text may allow reconstruction of tokenized values. The profile reduces what is exposed but doesn’t verify computation conditions.
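The tokenization mechanism can be sketched with a small, illustrative subset of pattern rules. A production ruleset would be far larger and maintained by the data governance team; the placeholder format is an assumption:

```python
import re

# Pre-compiled patterns for structured PII; illustrative subset only.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def tokenize(prompt: str):
    """Replace structured PII with consistent placeholder tokens.

    Returns the minimized prompt and the token map, which never leaves
    the orchestrator's custody. Repeated values get the same token, so
    the platform can still reason over the structure of the prompt.
    """
    token_map, counters = {}, {}

    def replacer(kind):
        def _sub(match):
            value = match.group(0)
            if value not in token_map:
                counters[kind] = counters.get(kind, 0) + 1
                token_map[value] = f"[{kind}_{counters[kind]}]"
            return token_map[value]
        return _sub

    for kind, pattern in PATTERNS.items():
        prompt = pattern.sub(replacer(kind), prompt)
    return prompt, token_map
```

Note how the sketch also illustrates the profile's stated limitation: a sensitive value that matches no pattern, or a name spelled out in free text, passes through untouched.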
Profile 3 (Cryptographic Experimental) provides governance structure for Fully Homomorphic Encryption and Secure Multi-Party Computation workloads on custom research stacks. Neither technology is deployable at production scale against standard LLM APIs today. FHE imposes computational overhead several orders of magnitude above cleartext inference for transformer architectures. SMPC requires custom multi-party infrastructure that does not exist in the commercial LLM ecosystem. This profile can’t be claimed as production compliance infrastructure. It exists so the specification framework is ready when the technology matures, estimated at a minimum five-year horizon.
The Catch-All
Every GOPEL dispatch maps to exactly one profile using a matrix of endpoint capability (attested, opaque, experimental) and data sensitivity classification (public, internal, confidential, regulated). Missing endpoint capability flags default to opaque, and missing data sensitivity labels default to regulated. Confidential or regulated data routed to an opaque endpoint receives a provisional profile and a mandatory Pause gate. The human arbiter resolves the assignment before any data leaves custody. Regulated data to an experimental endpoint is blocked entirely.
There is no fifth option, and no dispatch passes through unclassified because the defaults fail closed and every scenario is covered.
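The fail-closed matrix can be sketched as a small decision function. The field values and return schema are illustrative assumptions, and the provisional-profile number shown for the Pause path is hypothetical; in the specification that assignment is resolved by the human arbiter:

```python
def assign_profile(endpoint, sensitivity):
    """Map endpoint capability x data sensitivity to exactly one outcome.

    Missing inputs fail closed: unknown capability is treated as opaque,
    unlabeled data is treated as regulated.
    """
    endpoint = endpoint or "opaque"           # missing capability flag -> opaque
    sensitivity = sensitivity or "regulated"  # missing sensitivity label -> regulated

    if endpoint == "experimental":
        if sensitivity == "regulated":
            return {"action": "BLOCK"}  # regulated data never reaches experimental stacks
        return {"profile": 3, "action": "DISPATCH"}

    if endpoint == "attested":
        return {"profile": 1, "action": "DISPATCH"}

    # Opaque endpoint: sensitive data gets a provisional profile and a Pause gate
    # that only the human arbiter can resolve.
    if sensitivity in ("confidential", "regulated"):
        return {"profile": 0, "action": "PAUSE", "provisional": True}
    return {"profile": 0, "action": "DISPATCH"}
```

Because the defaults coerce missing metadata toward the most restrictive values, there is no input combination that escapes classification.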
What the Review Found
Independent AI platforms adversarially reviewed the CPE across two rounds, totaling twelve reviews from six platforms (Claude, Gemini, Grok, DeepSeek, Kimi, ChatGPT). All six converged on TEE attestation as the primary deployable answer. All six preserved the dissent that the gap cannot fully close at the orchestration layer. ChatGPT contributed the four-profile classification structure that became the backbone of the extension, the IETF RATS/EAT/AIR evidence model reference, the legal framing correction (GDPR Article 25 as primary anchor, not AI Act Article 10), and the normative rule that GOPEL cannot truthfully certify what it cannot verify. Kimi provided the fullest implementation specification with YAML config, JSON audit schemas, and a three-phase deployment path. Gemini identified the GPU VRAM boundary limitation. Grok identified NVIDIA H100 Confidential Computing as production-ready TEE with GPU memory protection.
In the second round, ChatGPT caught that the profile assignment matrix was not fully deterministic (some cells resolved only after human arbitration), that Profile 1’s report status was overstated when no signed receipt existed, and that the compliance table used “satisfied” language that a technical specification cannot claim as legal fact. All three were corrected: mandatory Pause with provisional profile assignment, split evidence grades, and evidentiary posture language.
What Both Extensions Share
Neither extension adds cognitive work to GOPEL, and every check is binary and deterministic: signatures verify as valid or invalid, measurements match or mismatch, attestations register as present or absent, patterns match or don’t, receipts arrive or they don’t, and input hashes confirm or fail. GOPEL performs zero content evaluation before, during, or after these operations.
Both extensions state their limitations in the specification text, not in a footnote. The post-quantum amendment names historical classical signatures as a residual risk, and the confidential processing extension opens with the governing position that the gap cannot fully close at the orchestration layer. Both build every governance control on honest foundations rather than overclaimed closure.
Independent AI platforms adversarially reviewed both extensions under the HAIA-CAIPR protocol, where each platform reviews independently without access to the others’ findings. Convergence and dissent are documented, and the review record, including which platform found which issue and how it was resolved, is part of the published specification. Both extensions are committed to the public repository with immutable Git timestamps.
The Adoption Reality
Specifications are not the same as deployments. Publishing a post-quantum signature tier doesn’t mean every organization’s HSM firmware supports ML-DSA key generation today, and publishing a confidential processing profile doesn’t mean every AI platform exposes attestation endpoints today.
The deployment path is phased to match infrastructure reality.
For the post-quantum amendment, Phase 1 is immediate: configure new deployments for Tier B hybrid signatures where HSM and PKI infrastructure support ML-DSA. Where they do not, document the migration plan and anchor the classical chain externally. The hardware already exists: AWS, Azure, and Google support confidential computing instances, and HSM vendors are adding post-quantum algorithm support. The bottleneck is organizational PKI readiness, not technology availability.
For the confidential processing extension, Phase 1 is also immediate and requires no platform cooperation: classify every dispatch, apply tokenization rulesets to sensitive data headed for opaque endpoints, trigger human arbitration for regulated data without attestation evidence. This runs today against any LLM API without a single platform-side change. Phase 2 (2026-2027) adds Profile 1 attestation for platforms operating in confidential computing environments, which requires platform cooperation but uses infrastructure that major cloud providers already offer. Phase 3 (2027-2028) makes Profile 1 the default for all sensitive data as confidential AI inference becomes standard practice.
The cost is real. HSM partitions for dual-key management carry procurement and operational overhead, external anchoring to a Trusted Timestamping Authority requires either a commercial TSA contract or self-operated TSA infrastructure, tokenization rulesets require data governance teams to define and maintain pattern libraries, and attestation trust stores require security teams to manage measurement allowlists and track vendor security bulletins. None of this comes free.
The cost of not doing it is also real. GDPR penalties reach 20 million euros or 4% of global turnover at the upper tier, EU AI Act penalties reach 35 million euros or 7% of global turnover for the most serious violations, and DORA periodic penalty payments run at up to 1% of average daily turnover for ongoing noncompliance. The specification extensions produce the auditable evidence that shows governance controls exist, and the absence of that evidence is what regulators find when they look.
What This Means for the Governance Lifecycle
GOPEL now governs and documents at every phase of the AI workflow.
Before execution: Checkpoint-Based Governance (CBG v4.7) establishes human authority at defined decision points. HAIA-RECCLIN assigns functional roles across multiple AI platforms. GOPEL enforces the checkpoint gates with default-deny policy and single-veto blocking.
During execution: The Confidential Processing Extension classifies every dispatch, verifies attestation evidence where available, reduces disclosure through tokenization where attestation is unavailable, and triggers mandatory human arbitration when sensitive data reaches unverified environments. Evidence is produced at every layer, and the gaps that remain are documented rather than invisible.
After execution: The Post-Quantum Cryptographic Agility Amendment ensures the audit trail remains tamper-evident and the signatures binding human identity to governance decisions remain unforgeable as the computing threat environment evolves. External anchoring secures historical records against chain-rewrite attacks.
“Full lifecycle” doesn’t mean “fully verified at every phase.” It means every phase carries governance controls, audit evidence, and human accountability. Where verification evidence exists (attested endpoints, quantum-resistant signatures), the governance is backed by cryptographic proof. Where it does not (opaque endpoints, historical classical signatures), the governance is backed by documented risk, mandatory human arbitration, and honest reporting that names what cannot be verified.
That distinction is the difference between governance infrastructure and governance theater: infrastructure tells the truth about what it can and cannot do, while theater claims certainty it cannot deliver.
Why This Matters Now
The AI governance space is filling with new practitioners making architectural claims about deterministic enforcement, cryptographic proof, and privacy during computation. Multiple voices are arriving independently at the principle that enforcement must resolve before state mutation, that audit logs are not enforcement, and that governance requires structural verification rather than voluntary compliance. That independent convergence validates the problem, but it doesn’t validate any particular solution.
The differentiator is not the principle. It is the published specification with operational evidence, adversarial review, documented limitations, and a federal implementation roadmap. Every identified gap in the GOPEL specification is either resolved through cryptographic transition or governed through structural enforcement with documented residual risk. Every limitation is named. Every human decision is documented. Every review finding, including which platform found it and how it was resolved, is part of the published record.
Published specifications with adversarial review, documented limitations, and honest reporting are what governance infrastructure looks like.
Fact, Tactic, KPI
Fact: Classical digital signatures on GOPEL audit records are vulnerable to quantum-era forgery. Tactic: Migrate all new deployments to Tier B hybrid signatures by the end of 2026, with quarterly external hash chain anchoring for historical records. KPI: 100% of new audit chains operating under Tier B within 12 months; zero unanchored classical chains exceeding 90 days.
Fact: GOPEL has zero visibility into AI platform computation during inference. Tactic: Classify every dispatch under the four-profile CPE matrix, implement tokenization for all sensitive dispatches to opaque endpoints, and require Profile 1 attestation for all regulated data. KPI: Zero unclassified dispatches within 30 days of deployment; 95% or higher of sensitive dispatches routed through Profile 1 or Profile 2 within 12 months; fewer than 2% of dispatches requiring human override escalation.
Fact: Adversarial review by independent AI platforms catches vulnerabilities that single-platform development misses. Tactic: Run HAIA-CAIPR reviews for all specification extensions before publication, documenting convergence and dissent. KPI: All published specification extensions carry documented multi-platform review records with zero unresolved critical findings.
FAQ
What is the GOPEL Post-Quantum Cryptographic Agility Amendment?
A specification extension that future-proofs GOPEL’s audit trail against quantum computing. It adds three signature tiers (classical, hybrid, post-quantum primary) using NIST standards finalized in August 2024, with migration triggers tied to published NIST milestones for 2026, 2030, and 2035.
Why does GOPEL need post-quantum cryptography if the hash chain is already quantum-resistant?
The hash chain protects sequence integrity and is quantum-resistant. The digital signatures binding human decisions to governance records are not. The amendment protects the signing layer while the hash chain continues to protect the ordering.
What is the Confidential Processing Extension?
A deterministic framework that forces every AI platform dispatch to carry a verified privacy profile, where attested endpoints provide hardware-backed evidence and opaque endpoints trigger mandatory human arbitration for sensitive data. No dispatch passes through unclassified, and defaults fail closed.
Can GOPEL guarantee privacy during computation?
No. No orchestration layer can guarantee what happens inside an external platform’s compute environment. The CPE produces the strongest available evidence through TEE attestation and signed inference receipts, and it governs every dispatch where that evidence is unavailable through classification, tokenization, and mandatory human arbitration.
Where are the specifications published?
Both specifications are committed to the public repository at github.com/basilpuglisi/HAIA/tree/main/haia_agent with immutable Git timestamps.
What AI platforms were used for adversarial review?
The post-quantum amendment was reviewed by Gemini, Kimi, DeepSeek, Grok, and ChatGPT across two CAIPR rounds (ten total reviews). The confidential processing extension was reviewed by Claude, Gemini, Grok, DeepSeek, Kimi, and ChatGPT across two rounds (twelve total reviews). Zero rejections across all rounds.