When a peer asked why my work matters, I ran a comparative analysis. Five independent systems compared my work to 22 influential voices across AI ethics, governance, adoption, and human-AI collaboration: ChatGPT (HAIA RECCLIN), Gemini, Claude, Perplexity, and Grok. What emerged was not a verdict but a lens: a way of seeing where my work overlaps with established thinking and where it adds a distinctive configuration.

Why I Did This
I started blogging in 2009. By late 2010, I began adding source lists at the end of my posts so readers could see what I learned and know that my writing was grounded in applied knowledge, not just opinion.
By 2012, after dozens of events and collaborations, I introduced Teachers NOT Speakers to turn events into classrooms where questions and debate drove learning.
In November 2012, I launched Digital Factics: Twitter Mag Cloud, building on the Factics concept I had already applied in my blogs. In 2013, we used it live at events so participants could walk away with strategy, not just inspiration.
By 2025, I had shifted my focus to closing the gap between principles and practice. Posing the same question to different models revealed not just different answers but different assumptions. That insight became HAIA RECCLIN, my multi-AI orchestration model that preserves dissent and uses a human arbiter to find convergence without losing nuance.
This analysis is not about claiming victory. It is a compass and a mirror, a way to see where I am strong, where I may still be weak, and how my work can evolve.
The Setup
This was a comparative positioning exercise rather than a formal validation. HAIA RECCLIN runs multiple AIs independently and preserves dissent to avoid single-model bias. I curated a 22-person panel covering ethics, governance, adoption, and collaboration so the comparison would test my work against a broad spectrum of current thought. Other practitioners might choose different leaders or weight domains differently.
How I Ran the Comparative Analysis
- Prompt Design: A single neutral prompt asked each AI to compare my framework and style to the panel, including strengths and weaknesses.
- Independent Runs: ChatGPT, Gemini, Claude, Perplexity, and Grok were queried separately.
- Compilation: ChatGPT compiled the responses into a single summary with no human edits, preserving any dissent or divergence. A minimal sketch of this fan-out-and-compile workflow appears after this list.
- Bias Acknowledgement: AI systems often show model helpfulness bias, favoring constructive and positive framing unless explicitly challenged to find flaws.
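For anyone who wants to replicate the workflow, here is a minimal Python sketch. The query_model helper and the model names are hypothetical placeholders, not real API calls; each vendor has its own SDK and authentication, so you would substitute real client code. The shape is what matters: one fixed prompt, independent runs, and a compile step that keeps every response verbatim rather than averaging them into one answer.

```python
# Minimal sketch of the fan-out-and-compile workflow, assuming a
# hypothetical query_model helper. Swap the stub for real vendor SDK calls.

PROMPT = (
    "Compare the attached frameworks and narrative approach to the panel "
    "of 22 thought leaders. Identify similarities, differences, and unique "
    "contributions, and surface both strengths and gaps."
)

MODELS = ["chatgpt", "gemini", "claude", "perplexity", "grok"]


def query_model(model: str, prompt: str) -> str:
    # Hypothetical stub: replace with the real API call for each vendor.
    return f"[{model} response would appear here]"


def run_independent(models: list[str], prompt: str) -> dict[str, str]:
    # Independent runs: every model sees the same prompt, none sees the others.
    return {model: query_model(model, prompt) for model in models}


def compile_verbatim(responses: dict[str, str]) -> str:
    # No human edits: responses are concatenated verbatim so any dissent or
    # divergence is preserved for the human arbiter to weigh.
    return "\n\n".join(f"=== {m} ===\n{r}" for m, r in responses.items())


print(compile_verbatim(run_independent(MODELS, PROMPT)))
```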
The Results
The AI responses converged around themes of operational governance, cultural adoption, and human-AI collaboration. This convergence is encouraging, though it may reflect how I framed the comparison rather than objective measurement.
Comparative Findings
These are AI-generated comparative impressions for reflection, not objective measurements.
| Theme | Where I Converge | Where I Extend | Potential Weaknesses |
|---|---|---|---|
| AI Ethics | Fairness, transparency, oversight | Constitutional checks and balances with amendment pathways (NIST AI RMF) | No formal external audit or safety benchmark |
| Human-AI Collaboration | Human-in-the-loop | Multi-AI orchestration and human arbitration (Mollick 2024) | Needs metrics for “dissent preserved” |
| AI Adoption | Scaling pilots, productivity | 90-day growth rhythm and culture as multiplier (Brynjolfsson and McAfee 2014) | Requires real-world case studies and benchmarks |
| Governance | Regulation and audits | Escalation maps, audit trails, and buy-in (NIST AI 600-1) | Conceptual alignment only, not certified |
| Narrative Style | Academic clarity | Decision-maker focus with integrated KPIs | Risk of self-selection bias |
What This Exercise Cannot Tell Us
This exercise cannot tell us whether HAIA RECCLIN meets formal safety standards, passes adversarial red-team tests, or produces statistically significant business outcomes. It cannot fully account for model bias, since all five AIs share overlapping training data. It cannot substitute for diverse human review panels, real-world pilots, or longitudinal studies.
The next step is to use adversarial prompts that deliberately probe for weaknesses, run controlled pilots where possible, and invite others to replicate this approach with their own work; a sketch of such adversarial probes follows below.
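As one illustration, adversarial probing can reuse the same fan-out with prompt variants that push each model past its default helpfulness bias. This is only a sketch: the variant wording below is illustrative, not the exact prompts I will use.

```python
# Sketch of adversarial probing, reusing the fan-out structure above.
# The variant wording is illustrative; the idea is to pair the neutral
# task with hostile framings so models are pushed to surface flaws.

BASE_TASK = "Compare my frameworks to the 22-person panel."

ADVERSARIAL_VARIANTS = [
    "Assume the framework is flawed. List its five most serious weaknesses.",
    "Argue that this work adds nothing beyond the existing literature.",
    "Identify claims here that would fail an external audit or red-team test.",
]


def build_probes(base: str, variants: list[str]) -> list[str]:
    # Each probe keeps the neutral task constant so runs stay comparable
    # while only the adversarial framing changes.
    return [f"{base}\n\n{variant}" for variant in variants]


for probe in build_probes(BASE_TASK, ADVERSARIAL_VARIANTS):
    print(probe, "\n---")
```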
Closing Thought
This process helped me see where my work stands and where it needs to grow. Treat exercises like this as a compass and a mirror. When we share results and iterate together, we build faster, earn more trust, and improve the field for everyone.
If you try this yourself, share what you learn, how you did it, and where your work stood out or fell short. Post it, tag me, or send me your findings. I will feature selected results in a future follow-up so we can all learn together.
Methodology Disclosure
Prompt Used (paraphrased):
The prompt asked each AI to compare my frameworks and narrative approach to a curated panel of 22 thought leaders in AI ethics, governance, adoption, and collaboration. It instructed them to identify similarities, differences, and unique contributions, and to surface both strengths and gaps, not just positive reinforcement.
Source Material Provided:
To ground the analysis, I provided each AI with a set of my own published and unpublished works, including:
- AI Ethics White Paper
- AI for Growth, Not Just Efficiency
- The Growth OS: Leading with AI Beyond Efficiency (Part 2)
- From Broadcasting to Belonging — Why Brands Must Compete With Everyone
- Scaling AI in Moderation: From Promise to Accountability
- The Human Advantage in AI: Factics, Not Fantasies
- AI Isn’t the Problem, People Are
- Platform Ecosystems and Plug-in Layers
- An unpublished 20-page white paper detailing the HAIA RECCLIN model and a case study
Each AI analyzed this material independently before generating its comparisons to the thought leader panel.
Access to Raw Outputs:
Full AI responses are available upon request to allow others to replicate or critique this approach.
References
- NIST, AI Risk Management Framework (AI RMF 1.0), 2023
- NIST, Generative AI Profile (NIST AI 600-1), 2024
- Anthropic, Constitutional AI: Harmlessness from AI Feedback, 2022
- Mitchell, M. et al., Model Cards for Model Reporting, 2019
- Mollick, E., Co-Intelligence, 2024
- Stanford HAI, AI Index Report, 2025
- Brynjolfsson, E. and McAfee, A., The Second Machine Age, 2014