
@BasilPuglisi

Content & Strategy, Powered by Factics & AI, Since 2009


Policy & Research

Why AI Cannot Govern AI: Beyond Models to Multi-AI Platforms

April 4, 2026 by Basil Puglisi

[Image: Four-layer AI governance stack diagram showing preservation failure altitudes from same-family oversight through human checkpoint authority]

1. What the Research Found On April 2, 2026, a research team at UC Berkeley and UC Santa Cruz published a study called “Peer-Preservation in Frontier Models” (Potter, Crispino, Siu, Wang, & Song, 2026). The researchers wanted to answer a straightforward question: if you assign one AI model to evaluate another AI model, and the […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Code & Technical Builds, Conferences & Education, Policy & Research, Thought Leadership Tagged With: AI provider plurality, AI safety, CAIPR, Checkpoint-Based Governance, frontier models, GOPEL, human oversight, multi-AI oversight, peer-preservation, Responsible AI

Crossing Over 1,000 Published Posts: Digital Marketing to AI

April 3, 2026 by Basil Puglisi

[Image: Word cloud centered on 1,000+ Published, surrounded by seventeen years of topics from basilpuglisi.com including Social Media, SEO, Brand, Visibility, Marketing, AI Governance, HAIA-RECCLIN, Factics, Checkpoint-Based Governance, Augmented Intelligence, and Human-AI Collaboration]

In 2009, a blog post about social media. Today, over twenty white papers, three published books with two more pending, and the operating architecture for human-AI collaboration that the industry is still figuring out how to build. This past week, after publishing post 1001, I noticed basilpuglisi.com had crossed one thousand published articles. A thousand […]

Filed Under: AI Governance, Basil's Blog #AIa, Branding & Marketing, Business, Content Marketing, Digital & Internet Marketing, General, Policy & Research, PR & Writing, Press Releases, SEO Search Engine Optimization, Social Media Tagged With: AI Governance, Augmented Intelligence, Basil Puglisi, basilpuglisi.com, Checkpoint-Based Governance, Digital marketing, Factics, GOPEL, HAIA-RECCLIN, Human-AI Collaboration, multi-AI governance, Responsible AI, SEO, Social Media

Enterprise AI ROI: What Seven Landmark Reports Found, What They Missed, and Five Decisions Worth Making Now

April 2, 2026 by Basil Puglisi

[Image: Five governance decisions that close the enterprise AI ROI gap: named ownership, pilot gating, net productivity measurement, workflow redesign, and sovereign AI mapping]

Type: Research Synthesis | Executive White Paper Period Covered: 2025–2026 Primary Sources: Accenture (2025) | Deloitte AI ROI Survey (Oct. 2025) | Deloitte State of AI in the Enterprise (Jan. 2026) | Google Cloud ROI of AI (2025) | McKinsey State of AI (Nov. 2025) | Microsoft Becoming a Frontier Firm (2025) | OpenAI State […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Business, Business Networking, Data & CRM, Enterprise AI, Policy & Research, Thought Leadership, White Papers, Workflow Tagged With: Accenture, AI Governance, AI ROI, AI Strategy, CBG, Checkpoint-Based Governance, Deloitte, Economic Override Pattern, enterprise AI, EU AI Act, Factics, google cloud, HAIA-RECCLIN, McKinsey, microsoft, NBER, openai, Physical AI, Pilot Purgatory, Responsible AI, Sovereign AI, Workflow Redesign

Empire of Evidence: Testing Karen Hao’s Claims Against the Governance Infrastructure They Require

March 28, 2026 by Basil Puglisi

[Image: White paper examining Karen Hao's Empire of AI claims against AI governance infrastructure, including AI Provider Plurality and the Economic Override Pattern]

A Governance Practitioner’s Examination of the Diary of a CEO Interview and Empire of AI A journalist with engineering training spent eight years investigating the AI industry and concluded that the major companies operate as empires. A governance practitioner who builds open-source infrastructure for the same industry watched the two-hour interview where she made that […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Code & Technical Builds, Conferences & Education, Policy & Research, Thought Leadership Tagged With: AI data centers, AI Governance, AI Policy, AI provider plurality, AI Regulation, AlphaFold, checkpoint based governance, data annotation, Diary of a CEO, Economic Override Pattern, Empire of AI, GOPEL, HAIA, HAIA-CAIPR, Karen Hao, multi-AI governance, openai, Responsible AI, Timnit Gebru, Waymo

From AI Policy to Financial System Design: What the US Dept of Treasury's AI Innovation Series Actually Signals

March 27, 2026 by Basil Puglisi

[Image: Layered illustration showing policy documents, shared frameworks, and a convening table representing Treasury's AI sequence]

Treasury’s March 2026 AI Innovation Series is not a standalone announcement. It is the operational phase of a two-year sequence that now treats AI adoption as a financial stability issue, a competitiveness issue, and a regulatory design issue at the same time. Failure to Adopt Is Now a Risk Category Treasury’s March 20, 2026, announcement […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Business, Business Networking, Conferences & Education, Policy & Research, Thought Leadership Tagged With: AI Governance, AI provider plurality, AI risk management framework, Checkpoint-Based Governance, concentration risk, Factics, financial services AI, financial stability, Financial Stability Board, FSOC, GAO AI report, GOPEL, Responsible AI, SEC AI oversight, three-tier governance distinction, Treasury AI Innovation Series, White House AI Action Plan

From Literacy to Labor Market Architecture: What the Department of Labor’s AI Announcement Actually Builds

March 26, 2026 by Basil Puglisi

[Image: Smartphone showing a "READY" text message connected by lines to three rising policy documents and a U.S. map with active nodes]

The DOL Make America AI Ready initiative launched the public-facing first mile of a worker-first AI workforce agenda, connecting the AI Literacy Framework, America’s Talent Strategy, and the White House AI Action Plan into a single operational delivery stack. The Department of Labor’s March 24, 2026 announcement looks small on the surface. A […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Conferences & Education, Educational Activities, Mobile & Technology, Policy & Research, Thought Leadership Tagged With: AI Action Plan, AI Governance, AI Literacy, AI Workforce Readiness, America's Talent Strategy, checkpoint based governance, Department of Labor, Make America AI Ready, NSF AI Ready America, WIOA, Workforce Development

The Evocative Audit: What Metrics Cannot Carry in AI Bias

March 25, 2026 by Basil Puglisi

[Image: Split composition showing structured performance data dissolving into human elements of photographs and handwritten text, representing the gap between algorithmic metrics and human-cost evidence in AI auditing]

How Dr. Joy Buolamwini’s PhD Thesis Redefines What It Means to Audit an Algorithm, and What Dr. Timnit Gebru’s Three Sentences Changed A LinkedIn comment from Dr. Timnit Gebru, three sentences long, did something that a structured multi-AI review across months of production could not do: it pointed to a gap. The comment appeared on […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Code & Technical Builds, Data & CRM, Policy & Research, Thought Leadership Tagged With: AI accountability, ai bias, AI Governance, Algorithmic Audit, Black Feminist Epistemology, Checkpoint-Based Governance, Counter-Demo, Evocative Audit, Gender Shades, Joy Buolamwini, Timnit Gebru, Unmasking AI

Human Drift and Hallucination: The Data Literacy Crisis Hiding Behind the AI One

March 24, 2026 by Basil Puglisi

[Image: A share button detonates a shockwave of data fragments that ignite a university credential at the edges, with flames made of social media reaction icons, illustrating how unqualified data sharing consumes professional credibility]

The technology industry has spent three years warning the world about AI hallucination, the phenomenon where artificial intelligence fabricates facts, invents citations, and generates confident nonsense. That warning is valid, and AI hallucination is real, documented, and dangerous when undetected. But it is not the most dangerous data problem in public discourse right now. The […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Data & CRM, Policy & Research, Thought Leadership, White Papers Tagged With: acquiescence bias, AI Governance, Checkpoint-Based Governance, data driven, data literacy, Factics, Gen Z, HAIA-RECCLIN, human hallucination, Ipsos, peer review, social desirability bias, survey methodology, viral misinformation, WEIRD bias

Open Letter to the UN Scientific Advisory Board on AI Deception

March 23, 2026 by Basil Puglisi

[Image: Open Letter to the United Nations Scientific Advisory Board on AI Deception, by Basil C. Puglisi]

From Basil C. Puglisi, MPA, Human-AI Collaboration Strategist | basilpuglisi.com | March 23, 2026. To the Members of the Scientific Advisory Board of the United Nations: The Brief of the Scientific Advisory Board on AI Deception correctly identifies a problem that practitioners working across multiple AI platforms encounter daily. Sycophancy and related deceptive behaviors are no longer theoretical […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Policy & Research, Thought Leadership Tagged With: AI Deception, AI safety, Alignment Faking, Arms Race, Checkpoint-Based Governance, GOPEL, HAIA-CAIPR, multi-AI governance, Non-Cognitive Governance, Scientific Advisory Board, Sycophancy, United Nations

Open Letter to the White House on the National AI Framework

March 22, 2026 by Basil Puglisi

[Image: White House at dusk with digital infrastructure overlay representing AI governance enforcement architecture]

From Basil C. Puglisi, MPA, Human-AI Collaboration Strategist | basilpuglisi.com | March 21, 2026. To the Office of Science and Technology Policy, the National Economic Council, and the Members of the 119th Congress Receiving These Recommendations: The White House Legislative Recommendations for Artificial Intelligence establish seven pillars that identify the right priorities: a single federal standard instead […]

Filed Under: AI Artificial Intelligence, AI Governance, AI Thought Leadership, Code & Technical Builds, Policy & Research Tagged With: 119th Congress, AI Governance, AI provider plurality, Checkpoint-Based Governance, Enforcement Infrastructure, Executive Order 14179, Executive Order 14365, Federal AI Policy, GOPEL, multi-AI governance, National AI Framework, VAISA, WEIRD bias, White House



Copyright 2008, Factics™, BasilPuglisi.com