My Story: Why I Think This Way About AI

Someone asked me recently who I am to make claims about AI governance, to define terms, to publish books on the subject.

The question is familiar. It followed me onto Madison Avenue when I built websites for a startup and walked into rooms where the established players looked right through me. It found me again at Social Media Week in 2011 when I took a stage before anyone knew my name. It came back when I left digital media for a police academy and people who thought they knew me wondered what I was thinking. Now it follows the AI work.

I stopped minding the question years ago. The answer has never changed. I do not wait for permission to learn, and I do not stop asking questions just because the answers might unsettle the people who thought they had it figured out.

This is not a resume. This is the story of how I came to ask questions that most people skip over, and why I keep asking them.

The Question That Stayed

One question has followed me through every environment I have worked in: What do I not know?

I picked it up early and never put it down. Sociology taught me to look for structure, power, culture, and the consequences nobody planned for. Criminal Justice taught me that rules exist in tension with enforcement, that discretion carries weight, and that accountability lands on specific people whether they want it or not. Those two fields do not resolve into a clean synthesis. I stopped expecting them to. The tension trained me to distrust tidy answers and to ask instead who decides, under what authority, and with what consequence.

A Master of Public Administration added execution to the mix. Policy on paper means nothing until it survives contact with budgets, institutions, politics, and the people who have to carry it out. I learned that systems look elegant in proposals and messy in practice. That lesson never wore off.

Writing in Public

The academic work gave me the questions. What came next forced me to test them where results were visible and failure was public.

My first taste of building in public came on Madison Avenue, working for a startup, walking into rooms where nobody asked for my opinion until the work forced them to pay attention. The startup work taught me how to build, but I needed a place to think out loud where anyone could challenge me.

I started writing online in 2009, back when blogging still meant something and feedback arrived fast. Public writing teaches different lessons than academic writing. If the facts were weak, readers said so. If the argument did not hold, someone picked it apart in the comments. If a tactic produced no measurable result, that failure sat in the open for anyone to see.

That pressure built habits. I started pairing every fact with a tactic and a way to measure whether it worked. The approach did not have a name yet, but it would become Factics later. At the time it was just how I kept myself honest about what I actually knew versus what I assumed.

By 2011, that work put me on stage at Social Media Week. Nobody in the audience knew who I was. I delivered the talk anyway, and the work spoke for itself. By 2012 and 2013, I was in the middle of New York City’s digital media scene. Social Media Week named me a top influencer. I appeared on screens in Times Square, interviewed executives, experimented with Google Glass in public, and built an audience that expected me to know what I was talking about.

I remember standing in Times Square one week talking about the future of media, then sitting at my desk the next week writing another article, testing another claim, measuring another outcome. Visibility scaled fast. So did the responsibility that came with it. Every success showed me another gap in what I understood. The question never left: What do I not know about how these systems actually affect people?

I founded Digital Ethos as a nonprofit in 2011 because I wanted a structure for the teaching work. I joined the Social Media Club Global board because the community mattered. More than 900 articles later, the discipline holds. Claims get tested, tactics get measured, outcomes get documented, and if something fails it fails where people can see it.

The Turn Nobody Expected

What came next surprised almost everyone who knew me.

At the height of visible success in digital media, I entered the Port Authority Police Academy. No gap year. No slow transition. Six months of training that compressed everything I thought I knew about authority into a very small space.

I remember standing in recruit formation while instructors walked up the line. One of them held a copy of Newsday with my face on it, the same paper that had covered my tech work weeks earlier. I stood there in uniform with a shaved head while they looked at the photo of me wearing Google Glass.

The contrast hit hard. Visibility gave way to anonymity, influence gave way to hierarchy, and the systems I used to analyze from the outside now governed my daily life from the inside.

Law enforcement eliminated any remaining illusion that I understood how authority actually works. Information arrived incomplete, time compressed every decision, oversight followed every action, and accountability attached to my name, my shield number, my choices. Consequences could not be edited, optimized, or retracted after the fact.

The questions stayed the same but the weight changed. Who decides now meant authority backed by statute; under what authority meant policy, case law, and review boards; and with what consequence meant outcomes that followed people, including me, for years.

Over a decade with the PAPD taught me what no publication, no platform, and no classroom ever could. Authority without accountability is power without legitimacy. The burden of judgment cannot be delegated to a system. A human makes a decision. That human can be influenced for better or worse, but that human should also be held accountable. Edge cases are not data points to be optimized around. They are people. That lesson lives in my bones now.

When Fiction and Reality Touched

While I was living through the reality of authority and accountability in law enforcement, a television show was asking the same questions in fiction.

Person of Interest ran from 2011 to 2016. I watched it from inside the digital media world and engaged with it on social media like a lot of people did. The show stood out because it asked questions I was already sitting with: who decides, who authorizes, and what happens when a system knows more than the human responsible for acting on that knowledge.

The show did not shape my thinking at that point; it mirrored it.

Then something happened that I still have trouble explaining to people who were not there.

Months out of the academy, less than a year after engaging with Person of Interest online as a fan, I found myself on set during Season 4. Not as a fan. As a sworn police officer working the production at the Telehouse Teleport Data Center on Staten Island. A fictional surveillance system debated authority and restraint on screen while I stood there in uniform, freshly trained in the real limits, liabilities, and burdens of judgment that come with carrying a shield.

The realization arrived fast: the show was not imagining a future problem. It was describing a present one.

Harold Finch made immediate sense to me: extraordinary capability under deliberate constraint, knowledge separated from authority to act on it, uncertainty preserved so that humans stayed responsible for decisions the machine could not make for them.

Root made sense too, and that was the warning. Her certainty that the machine deserved trust, her belief that intelligence justified control, her confidence that good outcomes excused whatever process produced them. I had seen that mindset before, and it always breaks when consequences arrive that the confident person did not anticipate. Root is the reason HAIA-RECCLIN preserves dissent. Confidence without challenge is how systems fail.

That day on set did not create how I think. It sharpened what was already there.

And let me admit something: this was fun. Not the warning, but the experience itself. The opportunity to watch the story take shape in front of me, to interact with the actors and hear them talk about their characters, to see the props, to engage with writers who built a world that asked the questions I was already living. This was an experience that came to me well before I earned it, and I knew that at the time.

October 23, 2014. Telehouse Teleport Data Center, Staten Island. Standing between Root and Harold in uniform. The show asked questions I was already living. This moment stays with me whenever someone asks why I care about AI governance.

The Frameworks

The questions from law enforcement and the warnings from Person of Interest stayed with me for years. AI re-entered my professional life later, not as something novel but as something inevitable. Systems now operated at the scale Person of Interest anticipated, and capability had outrun the structures meant to govern it.

Most responses I saw focused on ethics statements, safety principles, and internal review boards. I recognized immediately what was missing: those tools shape behavior but they do not assign authority, and they do not answer the question I kept coming back to. What do we not know yet, and who bears responsibility when that gap causes harm?

Factics gave me one piece: facts paired with tactics and measured outcomes keep claims honest. The question about unknowns gave me the other piece: forced restraint, preserved dissent, and human arbitration at decision points.

Together they became the foundation for HAIA-RECCLIN. I did not build that framework from confidence in AI capability. I built it from structured humility about what AI cannot do and what humans must not delegate. Multiple roles create coverage, preserved disagreement surfaces blind spots, defined checkpoints require human judgment, and authority stays traceable throughout.
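To make that shape concrete, here is a minimal sketch in Python of the pattern as I describe it above: multiple roles contribute findings, disagreement is preserved rather than resolved by the system, and only a named human closes the checkpoint. The class names, role names, and fields are illustrative placeholders for this sketch, not the actual HAIA-RECCLIN specification.

```python
from dataclasses import dataclass, field

@dataclass
class RoleFinding:
    role: str                   # placeholder role names, e.g. "researcher", "reviewer"
    claim: str                  # what this role asserts
    dissent: str | None = None  # disagreement is preserved, never averaged away

@dataclass
class Checkpoint:
    topic: str
    findings: list[RoleFinding] = field(default_factory=list)

    def unresolved_dissent(self) -> list[RoleFinding]:
        # Preserved disagreement is how blind spots surface.
        return [f for f in self.findings if f.dissent]

    def decide(self, human: str, decision: str) -> dict:
        # Only a named human closes a checkpoint, so authority stays traceable.
        return {
            "topic": self.topic,
            "decided_by": human,
            "decision": decision,
            "dissent_on_record": [f.dissent for f in self.unresolved_dissent()],
        }
```

The design choice the sketch tries to show is the one the framework rests on: dissent is carried forward as part of the record, not reconciled away before a human sees it.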

The system exists to surface what I do not know, not to pretend the gaps are closed.

Checkpoint-Based Governance followed from the same logic and functions as architecture rather than aspiration. AI informs, humans decide, decisions get logged, authority stays traceable, and no system approves itself. Governance becomes the mechanism that acknowledges learning never stops, that capability always runs ahead of understanding, and that accountability requires structure to survive contact with pressure.
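As one illustration of what architecture rather than aspiration can mean in code, here is a minimal checkpoint function in Python. It is a sketch of the pattern, not the published Checkpoint-Based Governance specification; the log location, field names, and signature are mine for this example only.

```python
import json
import time

AUDIT_LOG = "decisions.jsonl"  # hypothetical log location for this sketch

def checkpoint(ai_recommendation: str, human_decider: str,
               approved: bool, reason: str) -> bool:
    """AI informs, a named human decides, and the decision is logged.

    The AI output is advisory input only; nothing in this function
    lets the system approve itself.
    """
    record = {
        "timestamp": time.time(),
        "ai_recommendation": ai_recommendation,  # advisory, never binding
        "decided_by": human_decider,             # authority stays traceable
        "approved": approved,
        "reason": reason,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    return approved
```

The point of the pattern is that approval arrives as a human's input, not a computed result, and every decision leaves a traceable entry behind it.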

The Book

Governing AI: When Capability Exceeds Control started as documentation of what I was learning.

When I began working with AI seriously, I approached it the way I approach everything: test it, measure it, document what works and what fails, and keep asking what I do not understand. I started with one AI platform, then added another, then another, and I found that using multiple AIs in structured collaboration caught errors that single platforms missed. That multi-AI workflow evolved into HAIA-RECCLIN as I formalized the roles, the checkpoints, and the governance architecture.

Along the way, Geoffrey Hinton started issuing warnings about where AI capability was heading. His concerns matched what I was already seeing in my own work: capability expanding faster than the structures meant to govern it. The question I had been asking my entire career, what do I not know, became the central problem of the field.

The book documents that evolution. How I learned to work with AI. How multi-AI collaboration revealed blind spots that single systems hide. How HAIA-RECCLIN took shape through trial and revision. How the persistent question about unknowns became an operational framework rather than just a habit of mind.

The book exists because practice became written work. Research about Hinton’s warnings became blogs. Blogs became position papers. The entirety of the experience became a question: What do we have to fear, should we fear it, and what do we do about it? The book answers that question, then shows how I did it.

So Who Am I?

I am someone who has been answering this question for a long time, in different rooms, across different industries, and the answer has never depended on whether the people asking approved of me being there.

I am someone who has written in public for sixteen years and been wrong in public often enough to know the difference between confidence and certainty.

I am someone who stood on the set of Person of Interest in uniform and recognized that the show was not just a work of fiction but a diagnostic of a problem that had already arrived.

I am someone who built frameworks out of my own need to learn and to solve my own problems, including the question I have never stopped asking: What do I not know?

That question is the foundation, the frameworks exist to keep asking it, and the book documents what the asking reveals.

I expect to keep learning until I cannot learn anymore. I will keep sharing what I learn, whether paid to do so or not, because sharing what we learn and collaborating is how we advance. That is why I define, publish, and share.

If you want the artifacts that come out of this worldview, here is the governance library.

  • Read the book: Governing AI: When Capability Exceeds Control | Basil C. Puglisi, available through major retailers
  • Explore the white papers: HAIA-RECCLIN: The Multi-AI Governance Framework for Individuals, Businesses and Organizations, and Checkpoint-Based Governance: A Constitution for Human-AI Collaboration, Version 4.2.1
  • Legacy print: Digital Factics: Twitter | MagCloud, first publication of the Factics methodology

Basil C. Puglisi, MPA, operates as a Human-AI Collaboration Strategist and AI Governance Consultant through BasilPuglisi.com. He retired from the Port Authority Police Department on a Performance of Duty Disability after over a decade of service, founded Digital Ethos in 2011, and served on the Social Media Club Global board. Published works include Governing AI: When Capability Exceeds Control and the Digital Factics series.
