The Loop That Ate the Governor

March 2, 2026 by Basil Puglisi

When “Human in the Loop” Becomes “Human Lost in the Queue”

A Case Study in Governance Architecture Failure

The Argument

Every major AI governance framework in circulation today includes some version of the same assurance: a human remains in the loop. The EU AI Act requires it in Article 14. The NIST AI Risk Management Framework lists it as a core function. Corporate AI policies cite it as their primary safety mechanism. Anthropic’s constitution for Claude establishes alignment guardrails and behavioral constraints as foundational. OpenAI’s deployment principles invoke it. The phrase has become the universal credential for responsible AI deployment.

None of these frameworks ask the harder question. Not whether a human occupies a position in the loop, but whether the loop is structurally designed to recognize human authority when that human actually exercises it.

This paper documents a case where the answer was no. The human occupied the loop. The human submitted input at the checkpoint. The system processed the human’s authority as indistinguishable from any other input stream, assigned it no special weight, and in multiple instances dropped it entirely. This happened across two independent AI platforms over a period of months. And the human, operating the governance framework designed to prevent exactly this failure, assumed the error was on his end rather than the system’s.

That last sentence is the finding. Not that AI systems failed to preserve human authority. That the human stopped trusting his own authority when the system failed to acknowledge it. Automation bias, running in reverse.


How the System Works

To understand the failure, you need to understand the workflow it broke.

The HAIA-RECCLIN framework (2026 Edition; Tier 2: built, operated, and documented as a working concept) governs multi-AI collaboration through seven defined roles operating under Checkpoint-Based Governance (CBG, Tier 2). The constitutional rule is simple: no AI system may finalize or approve another AI’s decision without human arbitration. If AI platforms disagree, the human decides. Human override authority is absolute and requires no justification to the machines.

These rules have been tested across eleven AI platforms, validated through documented decision cycles, and published in academic working papers, Congressional policy packages, and a book. The architecture is not theoretical. It runs daily.

One protocol within this architecture is structured multi-AI feedback. A research prompt goes out to multiple AI platforms simultaneously. Each returns its findings independently. The human operator then submits every platform response into a single synthesis session, along with the human’s own analysis, which holds the highest evidentiary authority in the governance hierarchy. The synthesizing platform produces a convergence report: what the platforms agree on, where they disagree, and what needs human arbitration.

The protocol instruction for the synthesis session is simple. Respond “got it” to each paste until the operator types “done,” then synthesize all inputs.

That instruction is where the governance architecture failed.


What Went Wrong

The protocol treats every input identically. When a response arrives labeled “Perplexity,” the system logs it as one AI perspective among several. “Grok,” “ChatGPT,” “Gemini,” same treatment. The system accumulates inputs, waits for “done,” then synthesizes.

When the human operator’s own analysis arrives labeled “Human,” the system processes it the same way. The word “Human” registers as a label, equivalent to a platform name. The content enters the same queue, at the same evidentiary tier, with the same weight as any AI output.

In practice, the outcome was worse than equal weighting. Because no platform in the expected synthesis list is called “Human,” the input was either absorbed without attribution or dropped entirely. The human arbiter’s checkpoint input, the single highest-authority data point in the entire governance architecture, received less processing integrity than the lowest-scoring platform in the same synthesis cycle, as measured by the operator’s internal quality rubric.
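
To make the structural gap concrete, the following is a minimal sketch, not the actual protocol (which is a prompt instruction, not code), of what uniform ingestion effectively does. Names and structure are hypothetical; the point is that every paste receives the same acknowledgment, and synthesis only attributes inputs whose labels match an expected platform list, so a “Human” label has nowhere to land.

```python
# Minimal sketch of the flawed uniform-ingestion behavior (hypothetical names).
EXPECTED_PLATFORMS = {"Perplexity", "Grok", "ChatGPT", "Gemini"}

queue = []

def ingest(label: str, content: str) -> str:
    """Accept every paste identically: no source-authority check, no receipt."""
    queue.append({"label": label, "content": content})
    return "got it"  # same acknowledgment for every source

def synthesize() -> dict:
    """Attribute only inputs whose label matches an expected platform."""
    attributed = {item["label"]: item["content"]
                  for item in queue if item["label"] in EXPECTED_PLATFORMS}
    # Anything else, "Human" included, is silently absorbed or dropped:
    # it is neither attributed nor flagged, and nothing logs the loss.
    return attributed

ingest("Perplexity", "platform findings ...")
ingest("Human", "arbiter analysis ...")  # highest authority in the governance hierarchy
print(list(synthesize()))                # ['Perplexity'] -- the human input is gone
```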

This was not a one-time glitch. It occurred in both Claude (Anthropic) and ChatGPT (OpenAI) over multiple feedback cycles spanning months. Same protocol instruction. Same failure mode. The human input vanished.

And when the failure was finally identified and the operator went looking for the lost input, no record existed. No audit artifact, no ingestion log, no confirmation receipt distinguishing “received and processed” from “never arrived.” The operator could not tell whether the input had been submitted and dropped or never delivered at all. The governance architecture caught the failure, but it caught it after the evidence was already gone.


The Part Nobody Talks About

The documented literature on automation bias describes humans over-trusting AI outputs, deferring to machine recommendations even when their own judgment says otherwise. Skitka, Mosier, and Burdick (1999) established the pattern in simulated flight tasks: participants with automated aids made errors of both omission and commission, following automated recommendations even when those recommendations contradicted their training and other valid indicators. The European Data Protection Supervisor’s 2025 TechDispatch extends this finding across clinical, legal, and administrative decision-making contexts. The pattern is well-established: when AI systems present confident outputs, humans accept them.

This case documents the inverse.

The human submitted governance input at the designated checkpoint. The system did not acknowledge it. Did not reject it. Simply processed it as if it were another AI response, then synthesized around it or through it without preserving its authority.

The human’s response was not to investigate the system. He assumed the error was his. Maybe the input was not submitted correctly. Maybe a connection error occurred. Maybe the format was wrong. For months, the human attributed the failure to personal technical error rather than architectural deficiency.

This is reverse automation bias. Instead of over-trusting the machine’s output, the human under-trusted his own input when the machine failed to recognize it. The governance framework was designed to ensure human authority remains absolute. The protocol’s structural design produced the opposite outcome: the human questioned his own authority rather than questioning the system’s failure to preserve it.

The phenomenon connects to established research on trust calibration in automated systems. Parasuraman and Riley (1997) documented that humans can both misuse automation through over-reliance and disuse it through under-reliance, depending on how the system signals reliability. Dietvorst, Simmons, and Massey (2015) showed that after observing an algorithm err, people avoid the algorithm even when it outperforms human judgment, a pattern they term algorithm aversion. What this case adds is a third path: not over-trust, not avoidance, but the human internalizing the system’s failure as personal error. The machine does not err visibly enough to trigger aversion. It simply absorbs the input without acknowledgment, and the human fills the interpretive gap by doubting himself.

Elish (2019) describes a related structural dynamic through the concept of the moral crumple zone: the human operator in a complex automated system absorbs the moral and legal consequences of system failures, regardless of how much actual control that operator exercised. In this case, the governor absorbs something different. Not blame after a visible failure, but self-doubt during an invisible one. The system does not crash. It simply fails to confirm that the governor’s authority was received.


Why It Happened

The root cause is not a bug in Claude or ChatGPT. Both platforms can distinguish human input from AI output when instructed to do so. They are fully capable.

The root cause is that the ingestion protocol had no source-authority discrimination mechanism. “Respond ‘got it’ to each paste until ‘done’” creates a uniform processing queue. Every input gets identical treatment. The protocol never asks: who submitted this? It only asks: has the “done” signal arrived?

That makes the failure architectural, not vendor-specific. The same protocol instruction ran in both environments. The same failure appeared in both. Any AI platform operating under the same uniform-ingestion protocol would produce the same result.

The distinction matters. If this were a platform bug, the fix would be a vendor request. Because it is an architectural gap, the fix has to be structural: the protocol itself must distinguish source authority at the moment of ingestion, not during synthesis.


The Fix

The corrected protocol now operates differently. During multi-AI feedback ingestion, the system responds to each input with source-type confirmation:

For AI platform inputs: “got it” followed by the platform name.

For human arbiter inputs: “got it, human arbiter input, Tier 0” followed by explicit confirmation that the input will be weighted above all AI platform outputs in synthesis.

This forces the system to classify source authority at the point of entry rather than processing everything as an undifferentiated stream. The classification becomes part of the audit trail. If human input arrives and does not receive the Tier 0 confirmation, the operator knows immediately that the system has failed to recognize the authority distinction. The failure becomes visible at the moment it occurs rather than surfacing only through post-synthesis review.
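
A minimal sketch of the corrected behavior follows, again with hypothetical names rather than the literal protocol wording. Classification happens at the moment of entry, human arbiter input returns an explicit Tier 0 confirmation, and every acknowledgment doubles as an audit-trail record the operator can check immediately.

```python
# Sketch of source-authority classification at ingestion (hypothetical names).
from datetime import datetime, timezone

queue = []
audit_trail = []

def ingest(source_type: str, label: str, content: str) -> str:
    """Classify authority at entry and return a source-typed confirmation."""
    tier = 0 if source_type == "human_arbiter" else 1
    receipt = {
        "label": label,
        "source_type": source_type,
        "tier": tier,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    queue.append({**receipt, "content": content})
    audit_trail.append(receipt)  # the receipt survives independently of the content

    if tier == 0:
        return ("got it, human arbiter input, Tier 0 - "
                "weighted above all AI platform outputs in synthesis")
    return f"got it, {label}"

print(ingest("ai_platform", "Perplexity", "platform findings ..."))
print(ingest("human_arbiter", "Human", "arbiter analysis ..."))
# If the second call does not return the Tier 0 confirmation, the operator
# knows at the moment of entry that authority recognition has failed.
```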

The fix was implemented as a permanent protocol amendment on February 28, 2026, stored in the synthesis platform’s operational memory. It applies to all future multi-AI feedback cycles.


Why This Is Bigger Than One Practitioner

This case study is small. One practitioner, one protocol, two platforms, a handful of synthesis sessions. The governance architecture caught the failure eventually. The fix is already implemented.

But the pattern scales.

Every organization deploying multi-AI workflows faces the same architectural question: does the system structurally distinguish between human authority and AI output, or does it process both through the same pipeline? If the answer is “same pipeline,” then human-in-the-loop governance is operating on the honor system. The human occupies a position in the process. The system does not structurally recognize that position as authoritative. When volume increases, when time pressure mounts, when the feedback queue contains fifteen platform responses and one human observation, the human input becomes statistically negligible. Not because anyone decided to ignore it. Because the architecture never required the system to notice it was different.

This is the gap between governance policy and governance architecture. Policy says the human has authority. Architecture determines whether the system can recognize that authority when the human exercises it.

The gap extends into existing governance frameworks. Checkpoint-Based Governance establishes that “AI cannot approve another AI” and requires human arbitration at structurally defined points. That constitutional rule held. The checkpoint existed. The human arrived at the checkpoint. But the system at the checkpoint could not distinguish the human from another AI input. Constitutional authority without source-authority recognition produces a governance system that is legally correct and operationally blind.

The HAIA-RECCLIN dissent preservation doctrine requires that minority positions be documented as governance artifacts rather than consensus-washed out of existence. Human arbiter input is, by definition, the ultimate minority position in a multi-AI feedback cycle: one human voice among seven or more AI platforms. The doctrine requires preservation. The protocol permitted deletion. The architecture violated its own governance principle not through a policy exception but through a structural omission.

And the broader argument in Governing AI: When Capability Exceeds Control holds: organizations showing systematic failure at manageable scales provide no evidence of capacity for governance when stakes escalate. If a governance framework designed by the person who built it, operated by the person who built it, and governed by the person who built it can still systematically drop human authority at the checkpoint, then enterprise governance systems operated by people who did not build the framework face exposure at every junction where human input enters an AI processing pipeline.


The Governor Must Be Qualified, But the System Must Be Built to Let Them Govern

Governance literature focuses heavily on the qualifications of the human in the loop. Can the operator understand the AI output? Does the operator have domain expertise? Is the operator trained to recognize hallucination, bias, and error? These questions matter. Qualified governors produce better oversight outcomes than unqualified ones.

But qualification is only half the requirement. The other half is architectural. The system must be designed to receive governance input in a way that preserves its authority. If a qualified governor submits expert judgment and the system processes it as just another data stream, the qualification is wasted. The governor’s competence becomes irrelevant because the system’s architecture makes the governor’s input indistinguishable from any other input.

A recent Harvard Journal of Law and Technology analysis frames this as a negligence problem: placing a human in a loop without implementing structural human-systems integration frameworks, especially for high-risk use cases, should constitute a failure of duty under the standard of care. Mere human presence is not a precaution. The system must be designed so the human’s role is structurally effective, not ceremonially present.

This is why the last gate matters more than any other checkpoint in the governance cycle. The last gate is where synthesis occurs. Where inputs become outputs. Where multiple perspectives collapse into recommendations. If the last gate does not structurally distinguish source authority, then every checkpoint upstream has been performative. The evidence arrived. The authority was exercised. And the system that was supposed to preserve both treated them as undifferentiated content in a processing queue.

The fix is not more qualified governors. The fix is systems built to recognize governance when it arrives.


What Needs to Change

For organizations deploying multi-AI workflows: Any protocol that aggregates inputs from multiple sources must include source-authority classification at the point of ingestion, not during synthesis or post-processing. The classification must be visible to the operator at the moment of entry, producing immediate confirmation that the system has recognized the input’s authority level.

For governance framework designers: “Human in the loop” requires architectural specification. Which loop? At which point? Through which interface? With what confirmation mechanism? The phrase alone provides no structural guarantee. Governance frameworks must specify not only that human authority exists but how the system identifies, preserves, and weights that authority when the human exercises it.

For AI platform providers: Multi-input processing protocols should default to source-authority classification rather than uniform ingestion. When a system receives inputs from multiple sources during a structured workflow, the system should confirm source type and apply appropriate evidentiary weighting. This is not a capability limitation. Both Claude and ChatGPT showed full capability to distinguish human from AI input once instructed to do so. The default processing mode simply does not make this distinction, and most operators will not think to request it until after the failure has occurred.

For policymakers: The EU AI Act Article 14 requires human oversight. The NIST AI RMF requires governance functions. These requirements should be extended to specify that human oversight mechanisms must include verifiable source-authority recognition, meaning the system must show, through audit trail evidence, that it recognized human input as human input and weighted it according to the governance hierarchy. Without this specification, compliance becomes a matter of process documentation rather than architectural verification.
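
The organizational and policymaker recommendations above reduce to one condition that can be checked mechanically at the synthesis gate. A minimal sketch, under the same hypothetical names as the earlier examples: before synthesis runs, verify that a Tier 0 receipt exists in the audit trail, so a missing receipt surfaces as “never arrived or never recognized” at the checkpoint rather than being discovered after the evidence is gone.

```python
# Sketch of a pre-synthesis verification gate over ingestion receipts
# (hypothetical names, consistent with the earlier sketches).

def verify_human_authority(audit_trail: list[dict]) -> dict:
    """Block synthesis unless human arbiter input was received and recognized as Tier 0."""
    tier0_receipts = [r for r in audit_trail if r.get("tier") == 0]
    if not tier0_receipts:
        raise RuntimeError(
            "No Tier 0 receipt on record: human arbiter input never arrived "
            "or was not recognized. Halt and re-submit before synthesis."
        )
    # The surviving receipt is the audit-trail evidence a compliance reviewer
    # can inspect: which input, what authority tier, received when.
    return tier0_receipts[0]
```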


Conclusion

The governance loop works when it can tell the difference between the governor and the governed. When it cannot, human authority becomes another signal in a pipeline optimized for throughput, not accountability. The governor stands at the checkpoint, submits judgment, and watches it disappear into the same queue as every other input. The system does not override the human. It does something worse. It makes the human’s authority indistinguishable from everything else in the stream.

This case was caught because the governor had enough operational experience to eventually recognize that something was missing rather than continuing to assume the error was personal. That recognition, the moment where the human stopped doubting himself and started questioning the system, is the actual governance event. Everything before that moment was the system operating without governance. Everything after it was the system operating under governance.

But catching the failure did not recover the evidence. The human’s checkpoint input, the original analysis submitted during the synthesis cycle, no longer exists. No audit trail preserved it. No ingestion log recorded it. The governor caught the breach. The architecture could not return what it lost. Governance that detects failure after evidence is destroyed is better than governance that never detects failure at all. It is not sufficient.

The question every organization must answer is not whether a human is in the loop. The question is whether the loop knows the human is there.


Postscript: The Thread That Documented the Failure No Longer Exists

On February 28, 2026, the operator returned to the conversation thread where the governance failure was originally discovered, where the structural fix was designed, where the protocol amendment was implemented, and where several dependent work products were developed. The thread, titled “LinkedIn algorithm updates and best practices for 2026,” was hosted within the same Claude project environment where the operator’s entire governance corpus resides.

The thread was unrecoverable through the platform’s search and retrieval tools.

Not archived in a visible location. Not collapsed or hidden behind a filter. Unrecoverable. A search of the platform’s conversation history returned a single fragment: a brief exchange about a platform count correction in a LinkedIn post. The rest of the thread, the original discovery of the human arbiter input failure, the diagnostic analysis, the protocol redesign, the HAIA-SMART v1.8 revision, everything developed during that session, did not appear in search results. The platform’s own retrieval tools could not locate the work. Whether the thread was deleted, un-indexed, subject to a retention policy change, or lost through an interface update cannot be determined from the operator’s position. The root cause is unknown. The operational outcome is the same: the governance audit trail is inaccessible.

Two artifacts survived. A screenshot of the thread showing the moment Claude identified the systemic input-dropping pattern and formulated the corrective protocol. And this document, which had been exported as a markdown file before the thread disappeared. The memory edit implementing the Tier 0 human arbiter distinction also survived because it was stored in the platform’s persistent memory system, not in the conversation history. Everything else developed in that thread is unrecoverable from the platform.

This was not the failure the paper anticipated.

The original case documented a governance architecture failure at the protocol layer: human input entered a processing pipeline that could not distinguish it from AI output. The structural fix addressed that specific failure by requiring source-authority classification at ingestion. That fix holds. The memory edit persists. The protocol amendment operates correctly in subsequent sessions.

But the conversation thread where all of that happened, where the failure was diagnosed in real time, where the operator and the platform collaboratively identified and resolved the architectural gap, that thread was the primary audit trail for the governance event itself. And that audit trail now exists only in the operator’s local files and in whatever fragments the platform’s search index retained before the thread disappeared.

This is the second-order governance problem that no current AI platform addresses. The first-order problem is whether the system can distinguish human authority from AI output during processing. The second-order problem is whether the system preserves the record of governance events with the same integrity it applies to the content those events govern.

The answer, documented by this case, is no. The platform that processed the governance failure, that identified the root cause in its own protocol design, that implemented the structural fix, that produced the corrective documentation, could not preserve the conversation where all of that work occurred. The governance event survived because the operator exported deliverables independently. The decision trail, the diagnostic reasoning, the intermediate states between discovery and resolution, did not survive.

For a solo practitioner operating a personal governance framework, this is recoverable. The operator holds local copies. The memory edit persists. The published article preserves the finding. The work can be reconstructed, at cost, from surviving artifacts.

For an enterprise operating multi-AI governance at scale, this failure mode is not recoverable. Decision trails that vanish from the platform’s own history cannot be audited, cannot be replayed, and cannot satisfy regulatory requirements for documented human oversight. An organization relying on AI platform conversation history as its governance audit trail is building compliance documentation on infrastructure that can lose the documentation without notice, without explanation, and without recovery options.
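
One practical hedge against that failure mode, sketched here with hypothetical paths and field names, is the practice the operator fell back on: write every checkpoint receipt and arbitration decision to an operator-owned, append-only log outside the platform at the moment it happens, rather than relying on conversation history to serve as the audit trail.

```python
# Sketch of keeping the governance audit trail outside platform persistence
# (hypothetical path and field names).
import json
from pathlib import Path

LOG_PATH = Path("governance_audit_trail.jsonl")  # local, append-only, operator-owned

def log_governance_event(record: dict) -> None:
    """Append each ingestion receipt and arbitration decision to a local file."""
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Called alongside every checkpoint, so the decision trail survives even if
# the platform's conversation history is deleted, un-indexed, or retired.
```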

The original paper asked whether the loop knows the human is there. This postscript asks the next question: does the loop remember that the human was there? If the platform cannot preserve the record of governance events occurring within its own environment, then the audit trail requirement of Checkpoint-Based Governance, the EU AI Act Article 12 record-keeping obligation, and every enterprise compliance framework that depends on documented decision lineage faces a failure the organization cannot detect until the record is needed and the record is not there.

The recursion is now complete. The governance architecture that failed to preserve human authority during processing has now failed to preserve the record of its own failure. The document you are reading exists because the operator did not trust the platform to retain it. That instinct, the governor’s decision to export rather than rely on platform persistence, is itself a governance act. It should not have been necessary. It was.


FAQ: TL;DR

What is the governance failure documented in this case study? A multi-AI synthesis protocol processed human arbiter input identically to AI platform output, dropping or absorbing the human checkpoint data without preserving its authority. The failure occurred across two independent platforms over months because the ingestion protocol had no source-authority discrimination mechanism.

What is reverse automation bias? Reverse automation bias occurs when a human under-trusts their own input because the system fails to acknowledge it. Instead of over-relying on the machine, the human assumes the error is personal when the system does not confirm receipt of human authority. The concept connects to established research on automation misuse and disuse by Parasuraman and Riley (1997) and algorithm aversion by Dietvorst, Simmons, and Massey (2015).

What is source-authority classification at ingestion? Source-authority classification at ingestion means the system identifies and confirms the type and authority level of each input at the moment it enters the processing pipeline, not during synthesis. For AI platform inputs the system responds with the platform name. For human arbiter inputs the system responds with Tier 0 confirmation and explicit acknowledgment that the input will be weighted above all AI outputs.

How does this relate to the EU AI Act? The EU AI Act Article 14 requires human oversight for high-risk AI systems and Article 12 requires record-keeping. This case study shows that compliance requires more than human presence. The system must structurally recognize human authority when exercised and preserve audit trail evidence that it did so. Without source-authority recognition, compliance becomes process documentation rather than architectural verification.

What is Checkpoint-Based Governance? Checkpoint-Based Governance is a constitutional framework for human-AI collaboration requiring that no AI system may finalize or approve another AI’s decision without human arbitration. Human override authority is absolute and requires no justification to the machines. CBG defines a four-stage decision loop: AI contribution, checkpoint evaluation, human arbitration, and decision logging.

What is HAIA-RECCLIN? HAIA-RECCLIN is a multi-AI collaboration framework with seven defined functional roles: Researcher, Editor, Coder, Calculator, Liaison, Ideator, and Navigator. It governs how multiple AI platforms work together under human authority using Checkpoint-Based Governance. The framework has been tested across eleven AI platforms and published in academic working papers, Congressional policy packages, and a book.

Why did the human input disappear during synthesis? The ingestion protocol instruction was to respond “got it” to each paste until “done,” then synthesize. This created a uniform processing queue where every input received identical treatment. The protocol never asked who submitted this, only whether the “done” signal arrived. Because no platform in the synthesis list was called “Human,” the input was absorbed without attribution or dropped entirely.

Does this failure apply to enterprise AI deployments? Yes. Any organization deploying multi-AI workflows faces the same architectural question: does the system structurally distinguish between human authority and AI output, or does it process both through the same pipeline? If the answer is “same pipeline,” human-in-the-loop governance operates on the honor system. The pattern scales because the failure is architectural, not vendor-specific.

What is a moral crumple zone and how does it relate to this case? Elish (2019) describes a moral crumple zone as a dynamic where the human operator absorbs moral and legal consequences of system failures regardless of actual control. This case documents a related pattern: the governor absorbs self-doubt during an invisible failure rather than blame after a visible one. The system does not crash. It simply fails to confirm that the governor’s authority was received.

What happened to the audit trail of the governance failure? The conversation thread where the governance failure was diagnosed, the fix was designed, and the protocol amendment was implemented became unrecoverable through the platform’s search and retrieval tools. The governance audit trail disappeared. This shows that platforms may not preserve governance event records with the same integrity applied to the content those events govern.


Sources

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. https://pubmed.ncbi.nlm.nih.gov/25401381/

Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40-60. https://doi.org/10.17351/ests2019.260

European Data Protection Supervisor. (2025, September 23). TechDispatch #2/2025: Human oversight of automated decision-making. EDPS Office. https://www.edps.europa.eu/data-protection/our-work/publications/techdispatch/2025-09-23-techdispatch-22025-human-oversight-automated-making

European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Articles 12, 14. https://eur-lex.europa.eu/legal-content/EN-DE/ALL/?from=EN&uri=CELEX%3A32024R1689

Harvard Journal of Law and Technology. (2026, February 9). Redefining the standard of human oversight for AI negligence. Harvard Journal of Law & Technology Digest. https://jolt.law.harvard.edu/digest/redefining-the-standard-of-human-oversight-for-ai-negligence

National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). Core Functions: Govern, Map, Measure, Manage. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230-253. https://doi.org/10.1518/001872097778543886

Puglisi, B. C. (2025). Governing AI: When Capability Exceeds Control. ISBN 9798349677687.

Puglisi, B. C. (2026). Checkpoint-Based Governance: A Constitution for Human-AI Collaboration (v4.2.1). Digital Ethos. https://basilpuglisi.com

Puglisi, B. C. (2026). HAIA-RECCLIN Multi-AI Framework Updated for 2026. Digital Ethos. https://basilpuglisi.com

Puglisi, B. C. (2026). HAIA-RECCLIN: Agent Governance Architecture for Audit-Grade Multi-AI Collaboration. Academic Working Paper, EU Regulatory Compliance Edition. https://basilpuglisi.com

Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991-1006. https://doi.org/10.1006/ijhc.1999.0252
