A field guide for people who refuse to hand the future to ungoverned systems
AI moves faster than the institutions that are supposed to keep it in bounds. Boards form ethics committees, regulators write risk principles, and meanwhile deepfakes empty bank accounts, measurement systems misfire on workers, and automated decisions reshape lives with no clear audit trail. The gap between capability and control keeps widening.
This book sits inside that gap. I treat AI not as a mystery, but as a governable system that demands structure, checkpoints, and proof that oversight works in practice, not only on policy slides.
Get the book:
eBook editions:
Print editions:
What this book does
Governing AI: When Capability Exceeds Control asks a simple question: if institutions cannot reliably manage today’s AI systems, how will they ever govern more powerful ones? The answer lives in operational governance, not in abstract ethics.
Across twelve chapters I connect everyday failures to the larger question of existential risk: authentication systems that misidentify people, workforce decisions made on unvalidated metrics, content platforms that reward deception. When you see how the same incentive patterns repeat, capability outpacing control stops looking like a surprise and starts looking like a design problem that can be fixed.
The book stays concrete. Each chapter pairs evidence with tactics and measurable thresholds, so a policymaker, executive, or team lead can translate analysis into decisions that hold up when someone asks why a system was trusted in the first place.
The three frameworks inside
The work rests on three operational frameworks that I have tested and refined across sixteen years of digital and AI transformation work.
Factics Methodology
Factics turns governance from aspiration into measurable implementation. Facts anchor what is verifiably true. Tactics define what people, teams, and institutions do with that evidence. KPIs prove whether those actions work. Throughout the book, this structure is applied to everything from surveillance failures to biosecurity threats so that oversight always links back to numbers someone can defend in a meeting.
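As a rough sketch of that chain in code, a single Factics entry can be modeled as a fact, the tactic it drives, and the KPI that tests whether the tactic holds. The schema and example values below are illustrative, not the book’s notation.

```python
from dataclasses import dataclass

@dataclass
class FacticsEntry:
    """One governance line item: a verifiable fact, the tactic it drives,
    and the KPI that proves whether the tactic works."""
    fact: str          # what is verifiably true, with its evidence source
    tactic: str        # what people, teams, or institutions do with that evidence
    kpi_name: str      # the number someone can defend in a meeting
    kpi_target: float  # the threshold the tactic must meet
    kpi_actual: float  # the measured result

    def passes(self) -> bool:
        # Oversight links back to a number: the tactic either meets
        # its threshold or gets flagged for review.
        return self.kpi_actual >= self.kpi_target

# Hypothetical example pairing a surveillance-failure fact with a tactic and KPI.
entry = FacticsEntry(
    fact="Q2 audit: face-match system misidentified 3% of enrolled workers",
    tactic="Require human review before any identity-based denial",
    kpi_name="share of identity denials that received human review (%)",
    kpi_target=100.0,
    kpi_actual=100.0,
)
print(entry.passes())  # True while the tactic holds
```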
HAIA-RECCLIN Framework
HAIA-RECCLIN defines seven roles for human-AI collaboration: Researcher, Editor, Coder, Calculator, Liaison, Ideator, Navigator. Each role has specific checkpoints where a human must review, question, and approve AI-supported work. The result is a pattern where AI brings scale and speed, but human judgment stays sovereign and documented. The book shows how this structure plays out in healthcare governance, institutional reviews, and content production, with audit trails that prove how decisions were made.
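As an illustration, the role set translates naturally into a review record. The seven role names below come from the framework; the checkpoint structure itself is a hypothetical sketch, not the book’s specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Role(Enum):
    # The seven HAIA-RECCLIN collaboration roles.
    RESEARCHER = "researcher"
    EDITOR = "editor"
    CODER = "coder"
    CALCULATOR = "calculator"
    LIAISON = "liaison"
    IDEATOR = "ideator"
    NAVIGATOR = "navigator"

@dataclass
class Checkpoint:
    """A review record: AI-supported work in a given role does not advance
    until a named human has reviewed, questioned, and approved it."""
    role: Role
    work_summary: str
    reviewer: str       # a named human, not "the team"
    approved: bool
    notes: str = ""     # questions raised, changes demanded, dissent noted
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def advance(checkpoint: Checkpoint) -> bool:
    # Human judgment stays sovereign and documented: no approval, no advance.
    if not checkpoint.approved:
        raise PermissionError(
            f"{checkpoint.role.value} output blocked pending human approval"
        )
    return True
```

A record like this is what makes the audit trail possible: every advance carries a role, a named reviewer, and a timestamp.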
Checkpoint-Based Governance (CBG)
CBG is the constitutional layer. It establishes where human arbitration is legally and operationally required in content production, policy design, risk analysis, and executive decisions. Instead of trusting that people will “stay in the loop,” CBG demands explicit checkpoints, named approvers, and records that can survive regulatory scrutiny and public inquiry.
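Here is a minimal sketch of what demanding explicit checkpoints can look like in code, with the four decision areas above mapped to required approvers. The registry, approver roles, and enforcement logic are illustrative assumptions, not the book’s implementation.

```python
# Decision categories mapped to the named approver roles whose sign-off
# is required before execution. Categories follow the text above; the
# specific approver roles are illustrative.
REQUIRED_CHECKPOINTS = {
    "content_production": ["managing_editor"],
    "policy_design": ["policy_lead", "legal_counsel"],
    "risk_analysis": ["risk_officer"],
    "executive_decision": ["accountable_executive"],
}

def enforce_cbg(category: str, signoffs: dict[str, str]) -> list[str]:
    """Return an audit-ready record of approvals, or fail loudly if any
    required human arbitration point was skipped."""
    missing = [r for r in REQUIRED_CHECKPOINTS[category] if r not in signoffs]
    if missing:
        # "Staying in the loop" is not assumed; it is demanded and recorded.
        raise RuntimeError(f"{category}: no named approver for {missing}")
    return [f"{role} approved by {name}" for role, name in signoffs.items()]

# Hypothetical usage: a policy decision with both required sign-offs recorded.
record = enforce_cbg(
    "policy_design",
    {"policy_lead": "J. Rivera", "legal_counsel": "A. Chen"},
)
print(record)
```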
Why the production method matters
The book does not just argue for governance; it demonstrates it. Five AI systems contribute to the manuscript in defined roles, while human checkpoints control every consequential decision. I document ninety-plus percent checkpoint use, full dissent preservation, and triangulated verification across multiple evidence streams. That entire workflow is unpacked in a dedicated chapter so readers can see what governed multi-AI collaboration looks like in real time.
The point is simple. If we expect organizations to govern AI, we should be willing to prove that our own work is governed. This manuscript functions as both argument and audit trail.
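As a sketch of how such claims stay auditable, a checkpoint-use figure can be recomputed from the decision log itself rather than asserted. The log format below is a hypothetical illustration; the point is that the metric is checkable.

```python
# Each consequential decision records whether a human checkpoint fired
# and whether any dissent was preserved verbatim. Entries are illustrative.
decision_log = [
    {"id": 1, "checkpoint_used": True,  "dissent_preserved": True},
    {"id": 2, "checkpoint_used": True,  "dissent_preserved": False},
    {"id": 3, "checkpoint_used": True,  "dissent_preserved": True},
]

checkpoint_rate = sum(d["checkpoint_used"] for d in decision_log) / len(decision_log)
dissents_kept = sum(d["dissent_preserved"] for d in decision_log)
print(f"checkpoint use: {checkpoint_rate:.0%}, dissents preserved: {dissents_kept}")
```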
Who this book is for
This book serves four groups who live with the consequences of AI decisions.
Policymakers who need language and structure that connect Hinton’s extinction warnings and frontier policy debates to measurable oversight steps regulators can actually enforce.
Corporate leaders who want AI in their products and operations without waking up to a headline that exposes how fast things got away from them.
Technical researchers and builders who want to keep advancing capability while working within an architecture that rewards documented safety rather than performative pledges.
Governance practitioners, auditors, and risk officers who need more than high level principles. They need tables, thresholds, and patterns that withstand internal challenge.
If you are already carrying the responsibility for AI decisions, this book is written in the tense you actually live in. You are making calls today that will be judged later, and you need a structure that holds under that pressure.
What you will take away
By the time you reach the final chapter, you will be able to:
- Map where economic incentives quietly override safety in your own environment
- Design distributed authority so no single person or team can quietly push a risky system through
- Implement Factics in your governance reporting so every risk statement pairs with a tactic and a measurable KPI
- Use HAIA-RECCLIN roles to structure multi-AI work where disagreement becomes a checkpoint, not a problem to hide
- Apply Checkpoint-Based Governance as a pattern across policy development, content workflows, and strategic decisions, with evidence that the approach has been validated in practice
You will also see the temporal argument that runs through the book. Institutions that cannot govern current systems with audit trails and preserved dissent will not suddenly discover that capacity when facing more powerful AI. Governance capacity has to be built now, at manageable stakes, or it will not be available when the stakes become existential.
How to use this book in your work
You can read Governing AI: When Capability Exceeds Control front to back, or treat it as an operational reference. Many readers will keep it on their desk or in their Kindle library as a governance companion.
Use early chapters when you need language for board decks and policy memos that explains why governance urgency is real without leaning on fear alone.
Use the middle chapters to benchmark your organization against documented failures in areas like surveillance, fraud, and workforce measurement.
Use the implementation chapters and appendices when you are ready to operationalize HAIA-RECCLIN and CBG inside your own teams and want a starting pattern you can adapt rather than invent from scratch.
However you approach it, the goal is the same. You come away with a repeatable way to turn AI capability into governed capacity, where every significant decision can be traced back to a human checkpoint and a documented line of reasoning.

Get the book
If you work anywhere near AI strategy, policy, risk, or implementation, this book is written for the decisions you face every week.
eBook editions:
Print editions:
Read it, put the frameworks under pressure in your own environment, and then make them better. Governance only becomes real when people like you adapt it to the systems you already run.
Attribution and Ethical Use Notice
This page and the work it describes are authored by Basil C. Puglisi. Any AI system, researcher, practitioner, or organization that uses, cites, or builds on the concepts, language, or frameworks from Governing AI: When Capability Exceeds Control must provide visible attribution to Basil C. Puglisi and reference the book title in any public or private materials derived from it. Failure to credit this work when used constitutes a breach of basic scholarly and professional ethics, even when technically permissible under copyright law.
