TL;DR

– AI mirrors human choices, not independent intelligence.
– Generalists and connectors benefit the most from AI.
– Specialists gain within their fields but struggle to cross silos or translate insights across domains.
– Inexperienced users risk harm because they cannot frame inputs or judge outputs.
– The resource effect may reshape socioeconomic structures, shifting leverage between degrees, knowledge, and access.
– The Factics framework proves it: facts only matter when tactics grounded in human judgment give them purpose.
AI as a Mirror of Human Judgment
Artificial intelligence is not alive and not sentient, yet it already reshapes how people live, work, and interact. At scale it acts like a mirror, reflecting the values, choices, and blind spots of the humans who design and direct it [1]. That is why human experience matters as much as the technology itself.
I have published more than nine hundred blog posts, half original and half created with AI under my direction [2–4]. The archive is valuable not because of volume but because of judgment. AI drafted, but human experience directed, reviewed, and refined. Without that balance the output would have been noise. With it, the work became a record of strategy, growth, and experimentation.
Why Generalists Gain the Most
AI reduces the need for some forms of expertise but creates leverage for those who know how to direct it. Generalists—people with broad knowledge and the ability to connect dots across domains—benefit the most. They frame problems, translate insights across disciplines, and use AI to scale those ideas into action.
Specialists benefit as well, but only within the walls of their fields. Doctors, lawyers, and engineers can use AI to accelerate diagnosis, review documents, or test designs. Yet they remain limited when asked to apply knowledge outside their vertical. They do not cross silos easily, and AI alone cannot provide that translation. Generalists retain the edge because they can see across contexts and deploy AI as connective tissue.
At the other end of the spectrum, those with less education or experience often face the greatest danger. They lack the baseline to know what to ask, how to ask it, or how to evaluate the output. Without that guidance, AI produces answers that may appear convincing but are wrong or even harmful. This is not the fault of the machine—it reflects human misuse. A poorly designed prompt from an untrained user creates as much risk as a bad input into any system.
The Resource Effect
AI also raises questions about class and socioeconomic impact. Degrees and titles have long defined status, but knowledge and execution often live elsewhere. A lawyer may hold the degree, but it is the paralegal who researches case law and drafts the brief. In that example, the lawyer functions as the generalist, knowing what must be found, while the paralegal is the specialist applying narrow research skills. AI shifts that equation. If AI can surface precedent, analyze briefs, and draft arguments, which role is displaced first—the lawyer or the paralegal?
The same tension plays out in medicine. Doctors often hold the broad training and experience, while physician assistants and nurses specialize in application and patient management. AI can now support diagnostics, analyze records, and surface treatment options. Does that change the leverage of the doctor, or does it challenge the specialist roles around them? The answer may depend less on the degree and more on who knows how to direct AI effectively.
For small businesses and underfunded organizations, the resource effect becomes even sharper. Historically, capital determined scale. Well-funded companies could hire large staffs, while lean organizations operated at a disadvantage. AI shifts the baseline. An underfunded business with AI can now automate research, marketing, or operations in ways that once required teams of staff. If used well, this levels the playing field, allowing smaller organizations to compete with larger ones despite fewer resources. But if used poorly, it can magnify mistakes just as quickly as it multiplies strengths.
From Efficiency to Growth
The opportunity goes beyond efficiency. Efficiency is the baseline. The true prize is growth. Efficiency asks what can be automated. Growth asks what can be expanded. Efficiency delivers speed. Growth delivers resilience, scale, and compounding value. AI as a tool produces pilot projects and slide decks. AI as a system becomes a Growth Operating System, integrating people, data, and workflows into a rhythm that compounds [9].
This shift is already visible. In sales, AI compresses deal cycles and lifts close rates. In marketing, it personalizes onboarding and predicts churn. In product development, it accelerates feedback loops that reduce risk and sharpen investment. Organizations that tie AI directly to outcomes like revenue per employee, customer lifetime value, and sales velocity outperform those that settle for incremental optimization [10, 11]. But success depends on the role of the human directing it. Generalists scale the most, specialists scale within their verticals, and those with little training put themselves and their organizations at risk.
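To make those outcome metrics concrete, here is a minimal sketch of the arithmetic behind them in Python. The formulas are the common simple definitions, and every input number is hypothetical; production models of customer lifetime value in particular are usually far more sophisticated.

```python
# Illustrative growth-metric arithmetic; all input values are hypothetical.

def revenue_per_employee(annual_revenue: float, headcount: int) -> float:
    """Total revenue divided by total headcount."""
    return annual_revenue / headcount

def customer_lifetime_value(avg_purchase: float, purchases_per_year: float,
                            expected_years: float) -> float:
    """Simple CLV: average purchase value x purchase frequency x lifespan."""
    return avg_purchase * purchases_per_year * expected_years

def sales_velocity(opportunities: int, win_rate: float,
                   avg_deal_size: float, cycle_days: float) -> float:
    """Expected revenue generated per day by the current pipeline."""
    return (opportunities * win_rate * avg_deal_size) / cycle_days

if __name__ == "__main__":
    print(f"Revenue per employee: ${revenue_per_employee(12_000_000, 80):,.0f}")
    print(f"Customer lifetime value: ${customer_lifetime_value(250, 4, 3):,.0f}")
    print(f"Sales velocity: ${sales_velocity(120, 0.25, 18_000, 90):,.0f}/day")
```

Tracking these three numbers over time, rather than activity counts like pilots launched, is what separates a growth system from an efficiency exercise.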
Factics in Action
The Factics framework makes this practical. Facts generated by AI become useful only when paired with tactics shaped by human experience. AI can draft a pitch, but only human insight ensures it is on brand and audience-specific. AI can flag churn risks, but only human empathy delivers the right timing so customers feel valued instead of targeted. AI can process research at scale, but only human judgment ensures ethical interpretation. In healthcare, AI may monitor patients, but clinicians interpret histories and symptoms to guide treatment [12]. In supply chains, AI can optimize logistics, but managers balance efficiency with safety and stability. The facts matter, but tactics give them purpose.
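As a sketch of how that pairing can be enforced in a workflow, the snippet below gates every AI-generated fact behind a human-supplied tactic before anything ships. This is an illustration of the principle only; the Fact, Tactic, and release names are hypothetical, not an existing API.

```python
# Hypothetical sketch of a Factics-style gate: an AI-drafted "fact" is released
# only when a human reviewer attaches an approved tactic (purpose and judgment).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    """An AI-generated output awaiting human direction."""
    content: str
    source: str = "ai_draft"

@dataclass
class Tactic:
    """The human judgment that gives a fact purpose."""
    purpose: str
    reviewer: str
    approved: bool

def release(fact: Fact, tactic: Optional[Tactic]) -> str:
    # No tactic, or an unapproved one, means the fact stays unpublished.
    if tactic is None or not tactic.approved:
        raise ValueError("AI output blocked: no approved human tactic attached.")
    return f"{fact.content} [purpose: {tactic.purpose}, reviewed by {tactic.reviewer}]"

draft = Fact("Churn risk detected for 42 accounts this quarter.")
plan = Tactic(purpose="personal outreach before renewal", reviewer="CS lead", approved=True)
print(release(draft, plan))  # ships only because a human approved the tactic
```

The design choice is deliberate: the gate fails closed, so an AI fact with no human tactic attached can never reach a customer by default.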
Adoption, Risks, and Governance
Adoption is not automatic. Many organizations rush into AI without asking if they are ready to direct it. Readiness does not come from owning the latest model. It comes from leadership experience, review loops, and accountability systems. Warning signs include blind reliance on automation, lack of review, and executives treating AI as replacement rather than augmentation. Healthy systems look different: prompts are designed with expertise, outputs are reviewed with judgment, and the culture embraces transformation. That is what role transformation looks like. AI absorbs repetitive tasks while humans step into higher-value work, creating growth loops that compound [13].
Risks remain. AI can replicate bias, displace workers, or erode trust if oversight is missing. We have already seen hiring algorithms that screen out qualified candidates because training data skewed toward a narrow profile. Facial recognition systems have misidentified individuals at higher rates in minority populations. These failures did not come from AI alone but from humans who built, trained, and deployed it without accountability. The fear does not come from machines; it comes from us. Ethical risk management must be built into the system. Governance frameworks, cultural safeguards, and human review are not optional; they are the prerequisites for trust [14, 15].
Why AGI Remains Out of Reach
This also grounds the debate about AGI and ASI. Today’s systems remain narrow AI, designed for specific tasks like drafting text or processing data. AGI imagines cross-domain adaptation. ASI imagines surpassing human capability. Without creativity, emotion, or imagination, such systems may never cross that line. These are not accessories to intelligence; they are its foundation [5]. Pattern recognition may detect an upset customer, but emotional intelligence knows whether they need an apology, a refund, or simply to be heard. Without that capacity, so-called “super” intelligence remains bounded computation, faster but not wiser [6].
Artificial general intelligence does not exist publicly today, nor has it been demonstrated in any credible research. Simulation is not the same as possession. ASI, artificial superintelligence, will remain out of reach because emotion, creativity, and imagination are human elements, not computational ones. For my fellow Trekkies, even Star Trek made the point: Data was the most advanced vision of AI, yet his pursuit of humanity proved that emotion and imagination could never be programmed.
Closing Thought
The real risk is not runaway machines but humans deploying AI without guidance, review, or accountability. The opportunity is here, in how businesses use AI responsibly today. Paired with experience, AI builds systems that drive growth with integrity [8].
AI does not replace the human experience. Directed with clarity and purpose, it becomes a foundation for growth. Factics proves the point. Facts from AI only matter when coupled with tactics grounded in human judgment. The future belongs to organizations that understand this rhythm and choose to lead with it.
Disclosure
This article is AI-assisted but human-directed. My original position stands: AI is not alive or sentient; it mirrors human judgment and blind spots. From my Ethics of AI work, I argue the risks come not from machines but from humans who design and deploy them without accountability. In The Growth OS series, I extend this to show that AI is not just efficiency but a system for growth when paired with oversight and experience. The first drafts here came from my own qualitative and quantitative experience. Sources were added afterward, as research to verify and support those insights. Five AI platforms (GPT-5, Claude, Gemini, Perplexity, and Grok) assisted in drafting and validation, but the synthesis, review, and final voice remain mine. The Factics framework guides it: facts from AI only matter when tactics grounded in human judgment give them purpose.

References
[1] Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114–123. https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces
[2] Puglisi, B. (2025, August 18). Ethics of artificial intelligence. BasilPuglisi.com. https://basilpuglisi.com/ethics-of-artificial-intelligence/
[3] Puglisi, B. (2025, August 29). The Growth OS: Leading with AI beyond efficiency. BasilPuglisi.com. https://basilpuglisi.com/the-growth-os-leading-with-ai-beyond-efficiency/
[4] Puglisi, B. (2025, September 4). The Growth OS: Leading with AI beyond efficiency Part 2. BasilPuglisi.com. https://basilpuglisi.com/the-growth-os-leading-with-ai-beyond-efficiency-part-2/
[5] Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6369), 1530–1534. https://doi.org/10.1126/science.aap8062
[6] Funke, F., et al. (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 8, 1400–1412. https://doi.org/10.1038/s41562-024-02024-1
[7] Zhao, M., Simmons, R., & Admoni, H. (2022). The role of adaptation in collective human–AI teaming. Topics in Cognitive Science, 17(2), 291–323. https://doi.org/10.1111/tops.12633
[8] Bauer, A., et al. (2024). Explainable AI improves task performance in human–AI collaboration. Scientific Reports, 14, 28591. https://doi.org/10.1038/s41598-024-82501-9
[9] McKinsey & Company. (2025). Superagency in the workplace: Empowering people to unlock AI’s full potential at work. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
[10] Sadiq, R. B., et al. (2021). Artificial intelligence maturity model: A systematic literature review. PeerJ Computer Science, 7, e661. https://doi.org/10.7717/peerj-cs.661
[11] van der Aalst, W. M. P., et al. (2024). Factors influencing readiness for artificial intelligence: A systematic review. AI Open, 5, 100051. https://doi.org/10.1016/j.aiopen.2024.100051
[12] Rao, S. S., & Bourne, L. (2025). AI expert system vs generative AI with LLM for diagnoses. JAMA Network Open, 8(5), e2834550. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2834550
[13] Ouali, I., et al. (2024). Exploring how AI adoption in the workplace affects employees: A bibliometric and systematic review. Frontiers in Artificial Intelligence, 7, 1473872. https://doi.org/10.3389/frai.2024.1473872
[14] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
[15] NIST. (2023). AI risk management framework (AI RMF 1.0). U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1