
@BasilPuglisi

Content & Strategy, Powered by Factics & AI, Since 2009


Why I Am Facilitating the Human Enhancement Quotient

October 2, 2025 by Basil Puglisi


The idea that AI could make us smarter has been around for decades. Garry Kasparov was one of the first to popularize it after his legendary match against Deep Blue in 1997. Out of that loss he began advocating for what he called “centaur chess,” where a human and a computer play as a team. Kasparov argued that a weak human with the right machine and process could outperform both the strongest grandmasters and the strongest computers. His insight was simple but profound. Human intelligence is not fixed. It can be amplified when paired with the right tools.

Fast forward to 2025 and you hear the same theme in different voices. Nic Carter claimed rejecting AI is like deducting 30 IQ points from yourself. Mo Gawdat framed AI collaboration as borrowing 50 IQ points, or even thousands, from an artificial partner. Jack Sarfatti went further, saying his effective IQ had reached 1,000 with Super Grok. These claims may sound exaggerated, but they show a common belief taking hold. People feel that working with AI is not just a productivity boost, it is a fundamental change in how smart we can become.

Curious about this, I asked ChatGPT to reflect on my own intelligence based on our conversations. The model placed me in the 130 to 145 range, which was striking not for the number but for the fact that it could form an assessment at all. That moment crystallized something for me. If AI can evaluate how it perceives my thinking, then perhaps there is a way to measure how much AI actually enhances human cognition.

Then the conversation shifted from theory to urgency. Microsoft announced layoffs of between 6,000 and 15,000 employees, tied directly to its AI investment strategy. Executives framed the cuts around embracing AI, with the implication that those who could not or would not adapt were left behind. Accenture followed with even clearer language. Julie Sweet said outright that staff who cannot be reskilled on AI would be “exited.” More than 11,000 had already been laid off by September, even as the company reskilled over half a million people in generative AI fundamentals.

This raised the central question for me. How do they know who is or is not AI trainable? On what basis can an organization claim that someone cannot be reskilled? Traditional measures like IQ, SAT, or GRE tell us about isolated ability, but they do not measure whether a person can adapt, learn, and perform better when working with AI. Yet entire careers and livelihoods are being decided on that assumption.

At the same time, I was shifting my own work. My digital marketing blogs on SEO, social media, and workflow naturally began blending with AI as a central driver of growth. I enrolled in the University of Helsinki’s Elements of AI and then its Ethics of AI courses. Those courses reframed my thinking. AI is not a story of machines replacing people; it is a story of human failure if we do not put governance and ethical structures in place. That perspective pushed me to ask the final question. If organizations and schools are investing billions in AI training, how do we know if it works? How do we measure the value of those programs?

That became the starting point for the Human Enhancement Quotient, or HEQ. I am not presenting HEQ as a finished framework. I am facilitating its development as a measurable way to see how much smarter, faster, and more adaptive people become when they work with AI. It is designed to capture four dimensions: how quickly you connect ideas, how well you make decisions with ethical alignment, how effectively you collaborate, and how fast you grow through feedback. It is a work in progress. That is why I share it openly, because two perspectives are better than one, three are better than two, and every iteration makes it stronger.
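
To make the four dimensions concrete, here is a purely illustrative sketch of a composite score. The subscore names, the 0 to 100 scale, and the equal weights are my assumptions for the example, not the scoring defined in the white paper.

```python
# Hypothetical illustration only: the white paper defines the real scoring.
# Subscore names, scale, and equal weighting are assumptions of this sketch.
from dataclasses import dataclass

@dataclass
class HEQSubscores:
    connection_speed: float    # how quickly ideas are connected (0-100)
    decision_alignment: float  # decision quality with ethical alignment (0-100)
    collaboration: float       # effectiveness of human-AI collaboration (0-100)
    feedback_growth: float     # speed of improvement through feedback (0-100)

def heq_composite(s: HEQSubscores) -> float:
    """Equal-weight composite; real weights would come from validation studies."""
    parts = [s.connection_speed, s.decision_alignment, s.collaboration, s.feedback_growth]
    if not all(0 <= p <= 100 for p in parts):
        raise ValueError("subscores must be on a 0-100 scale")
    return sum(parts) / len(parts)

print(heq_composite(HEQSubscores(72, 80, 65, 90)))  # -> 76.75
```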

The reality is that organizations are already making decisions based on assumptions about who can or cannot thrive in an AI-augmented world. We cannot leave that to guesswork. We need a fair and reliable way to measure human and AI collaborative intelligence. HEQ is one way to start building that foundation, and my hope is that others will join in refining it so that we can reach an ethical solution together.

That is why I made the paper and the work available as a work in progress. In an age where people are losing their jobs because of AI and in a future where everyone seems to claim the title of AI expert, I believe we urgently need a quantitative way to separate assumptions from evidence. Measurement matters because those who position themselves to shape AI will shape the lives and opportunities of others. As I argued in my ethics paper, the real threat to AI is not some science fiction scenario. The real threat is us.

So I am asking for your help. Read the work, test it, challenge it, and improve it. If we can build a standard together, we can create a path that is more ethical, more transparent, and more human-centered.

Full white paper: The Human Enhancement Quotient: Measuring Cognitive Amplification Through AI Collaboration

Open repository for replication: github.com/basilpuglisi/HAIA

References

  • Accenture. (2025, September 26). Accenture plans on ‘exiting’ staff who can’t be reskilled on AI. CNBC. https://www.cnbc.com/2025/09/26/accenture-plans-on-exiting-staff-who-cant-be-reskilled-on-ai.html
  • Bloomberg News. (2025, February 2). Microsoft lays off thousands as AI rewrites tech economy. Bloomberg. https://www.bloomberg.com/news/articles/2025-02-02/microsoft-lays-off-thousands-as-ai-rewrites-tech-economy
  • Carter, N. [@nic__carter]. (2025, April 15). i’ve noticed a weird aversion to using AI on the left… deduct yourself 30+ points of IQ because you don’t like the tech [Post]. X (formerly Twitter). https://x.com/nic__carter/status/1912606269380194657
  • Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681–694. https://doi.org/10.1007/s11023-020-09548-1
  • Gawdat, M. (2021, December 3). Mo Gawdat says AI will be smarter than us, so we must teach it to be good now. The Guardian. https://www.theguardian.com/lifeandstyle/2021/dec/03/mo-gawdat-says-ai-will-be-smarter-than-us-so-we-must-teach-it-to-be-good-now
  • Kasparov, G. (2017). Deep thinking: Where machine intelligence ends and human creativity begins. PublicAffairs.
  • Puglisi, B. C. (2025). The human enhancement quotient: Measuring cognitive amplification through AI collaboration (v1.0). https://basilpuglisi.com/the-human-enhancement-quotient-heq-measuring-cognitive-amplification-through-ai-collaboration-draft
  • Sarfatti, J. [@JackSarfatti]. (2025, September 26). AI is here to stay. What matters are the prompts put to it… My effective IQ with Super Grok is now 10^3 growing exponentially… [Post]. X (formerly Twitter). https://x.com/JackSarfatti/status/1971705118627373281
  • University of Helsinki. (n.d.). Elements of AI. https://www.elementsofai.com/
  • University of Helsinki. (n.d.). Ethics of AI. https://ethics-of-ai.mooc.fi/
  • World Economic Forum. (2023). Jobs of tomorrow: Large language models and jobs. https://www.weforum.org/reports/jobs-of-tomorrow-large-language-models-and-jobs/


The Agent Era Is Quietly Here

September 30, 2025 by Basil Puglisi


AI agents are emerging as the hidden infrastructure shaping the next wave of digital transformation. They are not simply chatbots with plugins, but adaptive systems that reason, plan, and act across tools. For businesses, nonprofits, and creators, agents promise a shift from reactive digital processes to coordinated, self-correcting copilots that expand both capacity and impact.

The stakes are high. Teams today manage fragmented platforms, siloed data, and slow manual workflows that drain time and resources. Campaigns are delayed, insights are lost in noise, and leaders struggle to hit cycle-time, customer responsiveness, and content ROI targets. Agents offer an answer, embedding intelligence into the tactic layer of work, where data meets decision and execution.

Orchestration Is the Differentiator

Most early adopters think of agents as executors, completing a task when prompted. The real unlock is treating them as coordinators, orchestrating specialized modules that each handle a piece of the problem. Memory, context, and tool use must converge into a reliable workflow, not a single output. This orchestration layer is where agents cross the line from experiment to infrastructure (Boston Consulting Group, 2025).

Trust, Governance, and Memory

Capabilities alone are not enough. For agents to be trusted in production, workflows must be transparent, auditable, and resilient under stress. Governance and evaluation separate a flashy demo from a system that scales in a regulated, high-stakes environment. That is where frameworks like HAIA-RECCLIN step in, layering oversight, alignment, and checks into the orchestration layer. HAIA-RECCLIN assigns specialized roles — Researcher, Editor, Coder, Calculator, Liaison, Ideator, Navigator — to ensure each workflow is auditable, verifiable, and guided by human judgment.
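
To make the orchestration concrete, here is a minimal sketch of role-based routing in the spirit of HAIA-RECCLIN. It is an illustration only: the handler stubs, audit format, and approval gate are assumptions of this sketch, not the framework’s actual implementation.

```python
# A minimal sketch of role-based orchestration with a human-in-the-loop gate.
# Handlers, audit format, and approval flow are illustrative assumptions.
from typing import Callable

ROLES: dict[str, Callable[[str], str]] = {
    "Researcher": lambda task: f"sources gathered for: {task}",
    "Editor":     lambda task: f"draft reviewed for: {task}",
    "Calculator": lambda task: f"figures verified for: {task}",
    "Navigator":  lambda task: f"next step chosen for: {task}",
}

def run_workflow(task: str, steps: list[str], approve: Callable[[str], bool]) -> list[dict]:
    """Route each step to its role and keep an auditable trail.
    `approve` is the human judgment gate: nothing ships without sign-off."""
    audit = []
    for role in steps:
        output = ROLES[role](task)
        audit.append({"role": role, "output": output, "approved": approve(output)})
    return audit

trail = run_workflow("Q3 claims report", ["Researcher", "Calculator", "Editor"],
                     approve=lambda _: True)
for entry in trail:
    print(entry)
```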

Memory is the second bottleneck. Long-term context retention, consistent recall, and safe state management are what allow agents to scale beyond one-off tasks into continuous copilots. Without memory, orchestration is brittle. With it, agents begin to resemble durable operating systems (McKinsey & Company, 2025).
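
Since memory is named as the second bottleneck, a small sketch may help show what durable state means in practice. The SQLite layout and API below are assumptions for illustration, not any vendor’s design.

```python
# A minimal sketch of durable agent memory: long-term recall survives restarts.
# Table layout and method names are illustrative assumptions.
import sqlite3, json, time

class AgentMemory:
    def __init__(self, path: str = "agent_memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT, updated REAL)"
        )

    def remember(self, key: str, value: dict) -> None:
        self.db.execute(
            "INSERT INTO memory VALUES (?, ?, ?) "
            "ON CONFLICT(key) DO UPDATE SET value=excluded.value, updated=excluded.updated",
            (key, json.dumps(value), time.time()),
        )
        self.db.commit()

    def recall(self, key: str) -> dict | None:
        row = self.db.execute("SELECT value FROM memory WHERE key=?", (key,)).fetchone()
        return json.loads(row[0]) if row else None

mem = AgentMemory()
mem.remember("customer:42", {"last_issue": "delayed shipment", "status": "resolved"})
print(mem.recall("customer:42"))
```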

The Hidden Critical Success Factors

The conversation around agents often highlights features like multi-step planning or retrieval-augmented generation. Less attention goes to latency and security, yet these are the critical success factors. If an agent slows processes instead of accelerating them, adoption collapses. If security vulnerabilities surface, trust evaporates. Enterprises will not scale agents until these operational foundations are solved (IBM, 2025; Oracle, 2025).

Best Practice Spotlight: Beam AI and Motor Claims Processing

Beam AI demonstrates how agents move from concept to production. In a deployment with a Dutch insurer, Beam reports vendor-verified results of 91 percent automation of motor claims, a 46 percent reduction in turnaround time, and a nine-point improvement in net promoter score. Rather than replacing humans, the agents process routine data extraction, classification, and routing tasks. Human adjusters focus only on exceptions and oversight. In a domain where compliance, accuracy, and customer trust are paramount, the result is higher throughput, lower error, and faster resolution (Beam AI, 2025).

Creative Consulting Concepts

B2B Scenario: Enterprise Workflow Automation
A global logistics firm struggles with redundant reporting across regional offices. By piloting agents that integrate APIs from ERP and CRM systems, reports may be generated and distributed automatically. The measurable impact may be a 30 percent reduction in reporting cycle time and fewer data errors. The pitfall is governance, as without proper monitoring, agents may propagate inaccurate numbers.
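
As an illustration of this scenario, the sketch below merges stubbed ERP and CRM payloads into a regional report and holds the report for human review when the two systems disagree, which is one way to address the governance pitfall. Endpoints, fields, and the tolerance are hypothetical.

```python
# Illustrative sketch of the B2B scenario above. The fetchers stand in for
# real ERP/CRM API calls; fields and the tolerance are hypothetical.
def fetch_erp_shipments(region: str) -> dict:
    return {"region": region, "shipments": 1240, "billed_total": 98200.0}

def fetch_crm_orders(region: str) -> dict:
    return {"region": region, "orders": 1252, "booked_total": 99750.0}

def build_report(region: str, tolerance: float = 0.05) -> str:
    erp, crm = fetch_erp_shipments(region), fetch_crm_orders(region)
    gap = abs(erp["billed_total"] - crm["booked_total"]) / crm["booked_total"]
    # Governance gate: route to a human when the systems disagree too much,
    # so the agent does not propagate inaccurate numbers downstream.
    if gap > tolerance:
        return f"{region}: HOLD for review, ERP/CRM totals diverge by {gap:.1%}"
    return (f"{region}: {crm['orders']} orders booked, {erp['shipments']} shipped, "
            f"totals reconciled within {gap:.1%}")

print(build_report("EMEA"))
```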

B2C Scenario: E-commerce Customer Support
A retail brand faces rising customer service demand during holiday peaks. Deploying an agent to triage inquiries, handle FAQs, and escalate complex cases may reduce average response time from hours to minutes. Customer satisfaction scores may increase while human agents focus on high-value interactions. The challenge is bias in responses and ensuring cultural nuance is respected across markets.

Nonprofit Scenario: Donor Engagement Copilot
A nonprofit uses agents to personalize supporter outreach. By retrieving donor history, summarizing impact stories, and drafting tailored updates, the agent frees staff to focus on fundraising events. Donation conversion may improve by 12 percent in pilot campaigns. The pitfall is privacy, as agents must not expose sensitive donor information without strict safeguards.

Collaboration and Alignment

A final tension remains: will the biggest breakthroughs come from multi-agent collaboration or safer alignment? The answer is both. Multi-agent setups unlock coordination at scale, but without alignment, trust collapses. Alignment governs whether collaboration can be safely scaled, and governance frameworks must evolve in parallel with architectures.

Closing Thought

Agents are not the future, they are already here. The question is whether organizations will treat them as tactical add-ons or as strategic copilots. For leaders who measure outcomes in KPIs, the opportunity is clear: shorten cycle times, improve responsiveness, scale engagement, and reduce operational waste. The challenge is equally clear: build trust, apply governance, and ensure adoption across teams.

References

  • Beam AI. (2025). Case studies.
  • Boston Consulting Group. (2025). AI agents: How they will reshape business.
  • IBM. (2025). AI agent use cases.
  • LangChain. (2025). State of AI agents.
  • McKinsey & Company. (2025). Seizing the agentic AI advantage.
  • Oracle. (2025). AI agents in enterprise.


Scaling AI in Moderation: From Promise to Accountability

September 19, 2025 by Basil Puglisi

TL;DR

AI moderation works best as a hybrid system that uses machines for speed and humans for judgment. Automated filters handle clear cut cases and lighten moderator workload, while human review catches context, nuance, and bias. The goal is not to replace people but to build accountable, measurable programs that reduce decision time, improve trust, and protect communities at scale.

The way people talk about artificial intelligence in moderation has changed. Not long ago it was fashionable to promise that machines would take care of trust and safety all on their own. Anyone who has worked inside these programs knows that idea does not hold. AI can move faster than people, but speed is not the same as accountability. What matters is whether the system can be consistent, fair, and reliable when pressure is on.

Here is why this matters. When moderation programs lack ownership and accountability, performance declines across every key measure. Decision cycle times stretch, appeal overturn rates climb, brand safety slips, non-brand organic reach falls in priority clusters, and moderator wellness metrics decline. These are the KPIs regulators and executives are beginning to track, and they frame whether trust is being protected or lost.

Inside meetings, leaders often treat moderation as a technical problem. They buy a tool, plug it in, and expect the noise to stop. In practice the noise just moves. Complaints from users about unfair decisions, audits from regulators, and stress on moderators do not go away. That is why a moderation program cannot be treated as a trial with no ownership. It must have a leader, a budget, and goals that can be measured. Otherwise it will collapse under its own weight.

The technology itself has become more impressive. Large language models can now read tone, sarcasm, and coded speech in text or audio [14]. Computer vision can spot violent imagery before a person ever sees it [10]. Add optical character recognition and suddenly images with text become searchable, readable, and enforceable. Discord details how its media moderation stack uses ML and OCR to detect policy violations in real time [4][5]. AI is even learning to estimate intent, like whether a message is a joke, a threat, or a cry for help. At its best it shields moderators from the worst material while handling millions of items in real time.

Still, no machine can carry context alone. That is where hybrid design shows its value. A lighter, cheaper model can screen out the obvious material. More powerful models can look at the tricky cases. Humans step in when intent or culture makes the call uncertain. On visual platforms the same pattern holds. A system might block explicit images before they post, then send the questionable ones into review. At scale, teams are stacking tools together so each plays to its strength [13].
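
A minimal sketch of that tiered design, with stub classifiers and illustrative thresholds standing in for real models:

```python
# Hybrid moderation sketch: cheap screen first, stronger model for gray cases,
# humans for uncertain calls. Classifiers are stubs; thresholds are assumptions.
def cheap_screen(text: str) -> float:
    """Stub for a light, fast classifier returning P(violation)."""
    return 0.97 if "obvious slur" in text else 0.5 if "borderline" in text else 0.02

def strong_model(text: str) -> float:
    """Stub for a larger model consulted only on ambiguous items."""
    return 0.6 if "borderline" in text else 0.1

def moderate(text: str) -> str:
    p = cheap_screen(text)
    if p >= 0.95:
        return "remove (automated, clear-cut)"
    if p <= 0.05:
        return "allow (automated, clear-cut)"
    p = strong_model(text)      # escalate the gray zone
    if 0.3 <= p <= 0.7:         # still uncertain: intent or culture makes it unclear
        return "queue for human review"
    return "remove (model consensus)" if p > 0.7 else "allow (model consensus)"

for sample in ["hello world", "borderline joke", "obvious slur"]:
    print(sample, "->", moderate(sample))
```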

Consistency is another piece worth naming. A single human can waver depending on time of day, stress, or personal interpretation. AI applies the same rule every time. It will make mistakes, but the process does not drift. With feedback loops the accuracy improves [9]. That consistency is what regulators are starting to demand. Europe’s Digital Services Act requires platforms to explain decisions and publish risk reports [7]. The UK’s Online Safety Act threatens fines up to 10 percent of global turnover if harmful content is not addressed [8]. These are real consequences, not suggestions.

Trust, though, is earned differently. People care about fairness more than speed. When a platform makes an error, they want a chance to appeal and an explanation of why the decision was made. If users feel silenced they pull back, sometimes completely. Research calls this the “chilling effect,” where fear of penalties makes people censor themselves before they even type [3]. Transparency reports from Reddit show how common mistakes are. Around a fifth of appeals in 2023 overturned the original decision [11]. That should give every executive pause.

The economics are shifting too. Running models once cost a fortune, but the price per unit is falling. Analysts at Andreessen Horowitz detail how inference costs have dropped by roughly ninety percent in two years for common LLM workloads [1]. Practitioners describe how simple choices, like trimming prompts or avoiding chained calls, can cut expenses in half [6]. The message is not that AI is cheap, but that leaders must understand the math behind it. The true measure is cost per thousand items moderated, not the sticker price of a license.
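
A worked example of that math, with illustrative prices and token counts rather than current vendor rates:

```python
# Cost per thousand items moderated, not license sticker price.
# All figures below are illustrative assumptions.
def cost_per_1k_items(items: int, tokens_per_item: int,
                      usd_per_1k_tokens: float, fixed_monthly: float) -> float:
    variable = items * tokens_per_item / 1000 * usd_per_1k_tokens
    return (variable + fixed_monthly) / items * 1000

# 5M items/month, ~400 tokens each, $0.002 per 1K tokens, $20K fixed overhead
print(round(cost_per_1k_items(5_000_000, 400, 0.002, 20_000), 2))  # -> 4.8
# Trimming prompts to ~200 tokens per item roughly halves the variable cost,
# which is the kind of saving practitioners describe above.
```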

Bias is the quiet danger. Studies have shown that some classifiers mislabel language from minority communities at about thirty percent higher false positive rates, including disproportionate flagging of African American Vernacular English as abusive [12]. This is not the fault of the model itself, it reflects the data it was trained on. Which means it is our problem, not the machine’s. Bias audits, diverse datasets, and human oversight are the levers available. Ignoring them only deepens mistrust.

Best Practice Spotlight

One company that shows what is possible is Bazaarvoice. They manage billions of product reviews and used that history to train their own moderation system. The results came fast. Seventy-three percent of reviews are now screened automatically in seconds, but the gray cases still pass through human hands. They also launched a feature called Content Coach that helped create more than four hundred thousand authentic reviews. Eighty-seven percent of people who tried it said it added value [2]. What stands out is that AI was not used to replace people, but to extend their capacity and improve the overall trust in the platform.

Executive Evaluation

  • Problem: Content moderation demand and regulatory pressure outpace existing systems, creating inconsistency, legal risk, and declining community trust.
  • Pain: High appeal overturn rates, moderator burnout, infrastructure costs, and looming fines erode performance and brand safety.
  • Possibility: Hybrid AI human moderation provides speed, accuracy, and compliance while protecting moderators and communities.
  • Path: Fund a permanent moderation program with executive ownership. Map standards into behavior matrices, embed explainability into all workflows, and integrate human review into gray and consequential cases.
  • Proof: Measurable reductions in overturned appeals, faster decision times, lower per unit moderation cost, stronger compliance audit scores, and improved moderator wellness metrics.
  • Tactic: Launch a fully accountable program with NLP triage, LLM escalation, and human oversight. Track KPIs continuously: appeal overturn rate, time to decision, cost per thousand items, and percentage of actions with documented reasons. Scale with ownership and budget secured, not as a temporary pilot but as a standing function of trust and safety. A minimal tracking sketch follows below.
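
A minimal sketch of that KPI tracking, with assumed field names and a toy volume (a real program would compute these over millions of items):

```python
# KPI rollup for a moderation program; record fields are assumptions.
decisions = [
    {"appealed": True,  "overturned": False, "minutes": 4.0,  "documented": True},
    {"appealed": True,  "overturned": True,  "minutes": 12.0, "documented": True},
    {"appealed": False, "overturned": False, "minutes": 0.5,  "documented": False},
]
monthly_cost_usd = 1800.0

appeals = [d for d in decisions if d["appealed"]]
kpis = {
    "appeal_overturn_rate": sum(d["overturned"] for d in appeals) / len(appeals),
    "avg_time_to_decision_min": sum(d["minutes"] for d in decisions) / len(decisions),
    "cost_per_1k_items": monthly_cost_usd / len(decisions) * 1000,  # toy volume
    "pct_with_documented_reason": sum(d["documented"] for d in decisions) / len(decisions),
}
print(kpis)
```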

Closing Thought

Infrastructure is not abstract and it is never just a theory slide. Claude supports briefs, Surfer builds authority, HeyGen enhances video integrity, and MidJourney steadies visual moderation. Compliance runs quietly in the background, not flashy but necessary. The teams that stop treating this stack like a side test and instead lean on it daily are the ones that walk into 2025 with measurable speed, defensible trust, and credibility that holds.

References

  1. Andreessen Horowitz. (2024, November 11). Welcome to LLMflation: LLM inference cost is going down fast. https://a16z.com/llmflation-llm-inference-cost/
  2. Bazaarvoice. (2024, April 25). AI-powered content moderation and creation: Examples and best practices. https://www.bazaarvoice.com/blog/ai-content-moderation-creation/
  3. Center for Democracy & Technology. (2021, July 26). “Chilling effects” on content moderation threaten freedom of expression for everyone. https://cdt.org/insights/chilling-effects-on-content-moderation-threaten-freedom-of-expression-for-everyone/
  4. Discord. (2024, March 14). Our approach to content moderation at Discord. https://discord.com/safety/our-approach-to-content-moderation
  5. Discord. (2023, August 1). How we moderate media with AI. https://discord.com/blog/how-we-moderate-media-with-ai
  6. Eigenvalue. (2023, December 10). Token intuition: Understanding costs, throughput, and scalability in generative AI applications. https://eigenvalue.medium.com/token-intuition-understanding-costs-throughput-and-scalability-in-generative-ai-applications-08065523b55e
  7. European Commission. (2022, October 27). The Digital Services Act. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en
  8. GOV.UK. (2024, April 24). Online Safety Act: explainer. https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer
  9. Label Your Data. (2024, January 16). Human in the loop in machine learning: Improving model’s accuracy. https://labelyourdata.com/articles/human-in-the-loop-in-machine-learning
  10. Meta AI. (2024, March 27). Shielding citizens from AI-based media threats (CIMED). https://ai.meta.com/blog/cimed-shielding-citizens-from-ai-media-threats/
  11. Reddit. (2023, October 27). 2023 Transparency Report. https://www.reddit.com/r/reddit/comments/17ho93i/2023_transparency_report/
  12. Sap, M., Card, D., Gabriel, S., Choi, Y., & Smith, N. A. (2019). The Risk of Racial Bias in Hate Speech Detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1668–1678). https://aclanthology.org/P19-1163/
  13. Trilateral Research. (2024, June 4). Human-in-the-loop AI balances automation and accountability. https://trilateralresearch.com/responsible-ai/human-in-the-loop-ai-balances-automation-and-accountability
  14. Joshi, A., Bhattacharyya, P., & Carman, M. J. (2017). Automatic Sarcasm Detection: A Survey. ACM Computing Surveys, 50(5), 1–22. https://dl.acm.org/doi/10.1145/3124420


The Growth OS: Leading with AI Beyond Efficiency Part 2

September 4, 2025 by Basil Puglisi


Part 2: From Pilots to Transformation

Pilots are safe. Transformation is bold. That is why so many AI projects stop at the experiment stage. The difference is not in the tools but in the system leaders build around them. Organizations that treat AI as an add-on end up with slide decks. Organizations that treat it as part of a Growth Operating System apply it within their workflows, governance, and culture, and from there they compound advantage.

The Growth OS is an established idea. Bill Canady’s PGOS places weight on strategy, data, and talent. FAST Ventures has built an AI-powered version designed for hyper-personalized campaigns and automation. Invictus has emphasized machine learning to optimize conversion cycles. The throughline is clear: a unified operating system outperforms a patchwork of projects.

My application of Growth OS to AI emphasizes the cultural foundation. Without trust, transparency, and rhythm, even the best technical deployments stall. Over sixty percent of executives name lack of growth culture and weak governance as the largest barriers to AI adoption (EY, 2024; PwC, 2025). When ROI is defined only as expense reduction, projects lose executive oxygen. When governance is invisible, employees hesitate to adopt.

The correction is straightforward but requires discipline. Anchor AI to growth outcomes such as revenue per employee, customer lifetime value, and sales velocity. Make governance visible with clear escalation paths and human-in-the-loop judgment. Reward learning velocity as the cultural norm. These moves establish the trust that makes adoption scalable.

To push leaders beyond incrementalism, I use the forcing question: What Would Growth Require? (#WWGR) Instead of asking what AI can do, I ask what outcome growth would demand if this function were rebuilt with AI at its core. In sales, this reframes AI from email drafting to orchestrating trust that compresses close rates. In product, it reframes AI from summaries to live feedback loops that de-risk investment. In support, it reframes AI from ticket deflection to proactive engagement that reduces churn and expands retention.

“AI is the greatest growth engine humanity has ever experienced. However, AI does lack true creativity, imagination, and emotion, which guarantees humans have a place in this collaboration. And those that do not embrace it fully will be left behind.” — Basil Puglisi

Scaling this approach requires rhythm. In the first thirty days, leaders define outcomes, secure data, codify compliance, and run targeted experiments. In the first ninety days, wins are promoted to always-on capabilities and an experiment spine is created for visibility and discipline. Within a year, AI becomes a portfolio of growth loops across acquisition, onboarding, retention, and expansion, funded through a growth P&L, supported by audit trails and evaluation sets that make trust tangible.

Culture remains the multiplier. When leaders anchor to growth outcomes like learning velocity and adoption rates, innovation compounds. When teams see AI as expansion rather than replacement, engagement rises. And when the entire approach is built on trust rather than control, the system generates value instead of resistance. That is where the numbers show a gap: industries most exposed to AI have quadrupled productivity growth since 2020, and scaled programs are already producing revenue growth rates one and a half times stronger than laggards (McKinsey & Company, 2025; Forbes, 2025; PwC, 2025).

The best practice proof is clear. A subscription brand reframed AI from churn prevention to growth orchestration, using it to personalize onboarding, anticipate engagement gaps, and nudge retention before risk spiked. The outcome was measurable: churn fell, lifetime value expanded, and staff shifted from firefighting to designing experiences. That is what happens when AI is not a tool but a system.

I have also lived this shift personally. In 2009, I launched Visibility Blog, which later became DBMEi, a solo practice on WordPress.com where I produced regular content. That expanded into Digital Ethos, where I coordinated seven regular contributors, student writers, and guest bloggers. For two years we ran it like a newsroom, which prepared me for my role on the International Board of Directors for Social Media Club Global, where I oversaw content across more than seven hundred paying members. It was a massive undertaking, and yet the scale of that era now pales next to what AI enables.

In 2023, with ChatGPT and Perplexity, I could replicate that earlier reach, but only with accuracy gaps and heavy reliance on Google, Bing, and JSTOR for validation. By 2024, Gemini, Claude, and Grok expanded access to research and synthesis. Today, in September 2025, BasilPuglisi.com runs on what I describe as the five pillars of AI in content. One model drives brainstorming, several focus on research and source validation, another shapes structure and voice, and a final model oversees alignment before I review and approve for publication.

The outcome is clear: one person, disciplined and informed, now operates at the level of entire teams. This mirrors what top-performing organizations are reporting, where AI adoption is driving measurable growth in productivity and revenue (Forbes, 2025; PwC, 2025; McKinsey & Company, 2025). By the end of 2026, I expect to surpass many who remain locked in legacy processes. The lesson is simple: when AI is applied as a system, growth compounds. The only limits are discipline, ownership, and the willingness to move without resistance.

Transformation is not about showing that AI works. That proof is behind us. Transformation is about posture. Leaders must ask what growth requires, run the rhythm, and build culture into governance. That is how a Growth OS mindset turns pilots into advantage and positions the enterprise to become more than the sum of its functions.

References

Canady, B. (2021). The Profitable Growth Operating System: A blueprint for building enduring, profitable businesses. ForbesBooks.

Deloitte. (2017). Predictive maintenance and the smart factory.

EY. (2024, December). AI Pulse Survey: Artificial intelligence investments set to remain strong in 2025, but senior leaders recognize emerging risks.

Forbes. (2025, June 2). 20 mind-blowing AI statistics everyone must know about now in 2025.

Forbes. (2025, September 4). Exclusive: AI agents are a major unlock on ROI, Google Cloud report finds.

IMEC. (2025, August 4). From downtime to uptime: Using AI for predictive maintenance in manufacturing.

Innovapptive. (2025, April 8). AI-powered predictive maintenance to cut downtime & costs.

F7i.AI. (2025, August 30). AI predictive maintenance use cases: A 2025 machinery guide.

McKinsey & Company. (2025, March 11). The state of AI: Global survey.

PwC. (2025). Global AI Jobs Barometer.

Stanford HAI. (2024, September 9). 2025 AI Index Report.


The Growth OS: Leading with AI Beyond Efficiency

August 29, 2025 by Basil Puglisi


Part 1: AI for Growth, Not Just Efficiency

AI framed as efficiency is a limited play. It trims, but it does not multiply. The organizations pulling ahead today are those that see AI as part of a broader Growth Operating System, which unifies people, processes, data, and tools into a cultural framework that drives expansion rather than contraction.

The idea of a Growth Operating System is not new. Bill Canady’s Profitable Growth Operating System emphasizes strategy, data, talent, lean practices, and M&A as drivers of profitability. FAST Ventures has defined their own AI-powered G.O.S. with personalization and automation at its core. Invictus has taken a machine learning approach, optimizing customer profiles and sales cycles. Each is built around the same principle: move from fragmented approaches to unified, repeatable systems for growth.

My application of this idea focuses on AI as the connective tissue. Rather than limiting AI to workflow automation or reporting, I frame it as the multiplier that binds strategy, data, and culture into a single operating rhythm. It is not about efficiency alone, it is about capacity. Employees stop fearing replacement and start expanding their contribution. Trust grows, and with it, adoption scales.

By mid-2025, over seventy percent of organizations are actively using AI in at least one function, with executives ranking it as the most significant driver of competitive advantage. Global adoption is above three-quarters, with measurable gains in revenue per employee and productivity growth (McKinsey & Company, 2025; Forbes, 2025; PwC, 2025). Modern sources from 2025 confirm that AI-powered predictive maintenance now routinely reduces equipment downtime by thirty to fifty percent in live manufacturing environments, with average gains around forty percent and cost reductions of a similar magnitude. These results not only validate earlier benchmarks but show that maturity is bringing even stronger outcomes (Deloitte, 2017; IMEC, 2025; Innovapptive, 2025; F7i.AI, 2025).

Ten percent efficiency gains keep you in yesterday’s playbook. The breakthrough question is different: what would this function look like if we built it natively with AI? That reframe moves leaders from optimizing what exists to reimagining what’s possible, and it is the pivot that turns isolated pilots into transformative systems.

The Growth OS applied through AI is not a technology map, but a cultural framework. It sets a North Star around growth outcomes, where sales velocity accelerates, customer lifetime value expands, and revenue per employee becomes the measure of impact. It creates feedback loops where outcomes are captured, labeled, and fed back into systems. It promotes learning velocity by running disciplined experiments and making wins “always-on.” It scales trust by embedding governance, guardrails, and human judgment into workflows. The result is not just faster output, but a workforce and an enterprise designed to grow.

Culture remains the multiplier. When leaders anchor to growth outcomes like learning velocity and adoption rates, innovation compounds. When teams see AI as expansion rather than replacement, engagement rises. And when the entire approach is built on trust rather than control, the system generates value instead of resistance.

Efficiency is table stakes. Growth is leadership. AI will either keep you trapped in optimization or unlock a system of expansion. Which future you realize depends on the Growth OS you adopt and the culture you encode into it.

References

Canady, B. (2021). The Profitable Growth Operating System: A blueprint for building enduring, profitable businesses. ForbesBooks.

Deloitte. (2017). Predictive maintenance and the smart factory.

EY. (2024, December). AI Pulse Survey: Artificial intelligence investments set to remain strong in 2025, but senior leaders recognize emerging risks.

Forbes. (2025, June 2). 20 mind-blowing AI statistics everyone must know about now in 2025.

IMEC. (2025, August 4). From downtime to uptime: Using AI for predictive maintenance in manufacturing.

Innovapptive. (2025, April 8). AI-powered predictive maintenance to cut downtime & costs.

F7i.AI. (2025, August 30). AI predictive maintenance use cases: A 2025 machinery guide.

McKinsey & Company. (2025, March 11). The state of AI: Global survey.

PwC. (2025). Global AI Jobs Barometer.

Stanford HAI. (2024, September 9). 2025 AI Index Report.


Platform Ecosystems and Plug-in Layers

August 25, 2025 by Basil Puglisi


The plug-in layer is no longer optional. Enterprises now curate GPT Store stacks, Grok plug-ins, and compliance filters the same way they once curated app stores. The fact is adoption crossed three million custom GPTs in less than a year (OpenAI, 2024). The tactic is simple: use curated sections for research, compliance, or finance so workflows stay in line. It works because teams don’t lose time switching tools, and approval cycles sit inside the same stack. Who benefits? With a few checks and balances in place, the marketing and compliance directors who need assets reviewed before they move find streamlined value.

Grok 4 raises the bar with real-time search and document analysis (xAI, 2024). The tactic is to point it at sector reports or financials, then ask for stepwise summaries that highlight cost, revenue, or compliance gaps. It works because numbers land alongside explanations instead of scattered across drafts, and with Grok this happens in real time, current as of the prompt, not drawn from a static database inside the model. The benefit goes to analysts and campaign planners who must build messages that hold up under review, because the output reflects everything up to the moment of the prompt, not just copy that sounds good.

Google and Anthropic moved Claude into Vertex AI with global endpoints (Google Cloud, 2025). The fact is enterprises can now route traffic across regions with caching that lowers cost and latency. The tactic is to run coding and content workflows through Claude inside Vertex, where security and governance are already in place. It works because performance scales without losing control. Who benefits? Developers in regulated industries who invest in their process, where speed matters but oversight cannot be skipped.

Perplexity and Sprinklr connect the research and compliance layer. Perplexity Deep Research scans hundreds of sources and produces cite-first briefs in minutes (Perplexity, 2025). The tactic is to slot these briefs directly into Sprinklr’s compliance filters, which flag tone or bias before responses go live (Sprinklr, 2025). It works because research quality and compliance checks are chained together. Who benefits? B2C brands that invest in their setup and new processes, running campaigns across social channels where missteps are public and costly.

Lakera Guard closes the loop with real-time filters. Its July updates improved guardrails and moderation accuracy (Lakera, 2025). The tactic is to run assets through Lakera before they publish, measuring catch rates and logging exceptions. It works because risk checks move from manual review to automatic guardrails. Who benefits? Fortune 500 firms, SaaS providers, and nonprofits that cannot afford errors or policy violations in public channels.
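
As a sketch of that pre-publish flow: `guardrail_check` below is a hypothetical stand-in, not Lakera Guard’s actual API, and the logging mirrors the catch-rate tactic described above.

```python
# Pre-publish guardrail gate. `guardrail_check` is a hypothetical stub for
# whatever moderation service you use; the real vendor interface differs.
from datetime import datetime, timezone

def guardrail_check(asset: str) -> dict:
    """Hypothetical stub: returns a flagged verdict and reason."""
    flagged = "unreleased pricing" in asset
    return {"flagged": flagged, "reason": "policy:confidential" if flagged else None}

exceptions_log: list[dict] = []

def pre_publish(asset: str) -> bool:
    verdict = guardrail_check(asset)
    if verdict["flagged"]:
        exceptions_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "reason": verdict["reason"],
            "excerpt": asset[:60],
        })
        return False  # held for human review
    return True       # cleared to publish

print(pre_publish("Q4 campaign copy"))                       # True
print(pre_publish("draft with unreleased pricing details"))  # False
print(f"catches this run: {len(exceptions_log)} of 2 assets")
```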

Best Practice Spotlights
Dropbox integrated Lakera Guard with GPT Store plug-ins to secure LLM-powered features (Dropbox, 2024). Compliance approvals moved 30 percent faster, errors fell by 35 percent, not a typo. One lead said it was like plugging holes in a chessboard, the leaks finally stopped. The lesson is that when guardrails live inside the plug-in stack, speed and safety move together.

SoftBank worked with Perplexity Pro and Sprinklr to upgrade customer interactions in Japan (Perplexity, 2025). Cycle times fell 27 percent, exceptions dropped 20 percent, and customer satisfaction lifted. The lesson is that compliance and engagement can run in parallel when the plug-in layer does the review work before the customer sees it.

Creative Consulting Corner
A B2B SaaS provider struggles with fragmented plug-ins and approvals that drag on for days. The solution is to curate a GPT Store stack for research and compliance, add Lakera Guard as a pre-publish filter, and track exceptions in a shared dashboard. Approvals move 30 percent faster, error rates drop, and executives defend budgets with proof. Optimization tip, publish a monthly compliance scorecard so the lift is visible.

A B2C retailer fights campaign fatigue and review delays. Perplexity Pro delivers cite-first briefs, Sprinklr’s compliance module flags tone and bias, and the team refreshes creative weekly. Cycle times shorten, ad rejection rates fall, and engagement lifts. Optimization tip, keep one visual anchor constant so recognition compounds even as content rotates.

A nonprofit faces the challenge of multilingual safety guides under strict donor oversight. Curated translation plug-ins feed Lakera Guard for risk filtering, with disclosure lines added by default. Time to publish drops, completion improves, complaints shrink. Optimization tip, keep a public provenance note so donors see transparency built in.

Closing thought
Here’s the thing, ecosystems only matter when they close the space between idea and approval. This doesn’t happen without some trial and error, and it requires oversight, which sounds like a lot of manpower, but the output multiplies. GPT Store curates workflows, Grok 4 brings real-time analysis, Claude runs inside enterprise rails, Perplexity and Sprinklr steady research and compliance, and Lakera Guard enforces risk checks. With transparency labeling now a regulatory requirement, provenance and disclosure run in the background. The teams that treat ecosystems as infrastructure, not experiments, gain speed they can measure, trust they can defend, and credibility that lasts. The key is not to minimize oversight but to balance it with the ability to produce more.

References

Anthropic. (2025, July 30). About the development partner program. Anthropic Support.

Dropbox. (2024, September 18). How we use Lakera Guard to secure our LLMs. Dropbox Tech Blog.

European Commission. (2025, July 31). AI Act | Shaping Europe’s digital future. European Commission.

European Parliament. (2025, February 19). EU AI Act: First regulation on artificial intelligence. European Parliament.

European Union. (2025, July 24). AI Act | Shaping Europe’s digital future. European Union.

Google Cloud. (2025, May 23). Anthropic’s Claude Opus 4 and Claude Sonnet 4 on Vertex AI. Google Cloud Blog.

Google Cloud. (2025, July 28). Global endpoint for Claude models generally available on Vertex AI. Google Cloud Blog.

Lakera. (2024, October 29). Lakera Guard expands enterprise-grade content moderation capabilities for GenAI applications. Lakera.

Lakera. (2025, June 4). The ultimate guide to prompt engineering in 2025. Lakera Blog.

Lakera. (2025, July 2). Changelog | Lakera API documentation. Lakera Docs.

OpenAI. (2024, January 10). Introducing the GPT Store. OpenAI.

OpenAI Help Center. (2025, August 22). ChatGPT — Release notes. OpenAI Help.

Perplexity. (2025, February 14). Introducing Perplexity Deep Research. Perplexity Blog.

Perplexity. (2025, July 2). Introducing Perplexity Max. Perplexity Blog.

Perplexity. (2025, March 17). Perplexity expands partnership with SoftBank to launch Enterprise Pro Japan. Perplexity Blog.

Sprinklr. (2025, August 7). Smart response compliance. Sprinklr Help Center.

xAI. (2024, November 4). Grok. xAI.


Mapping the July Shake-Up: Core Update Fallout, AI Overviews, and Privacy Pull

August 4, 2025 by Basil Puglisi


July was a reminder that search never sits still. Google’s June 2025 Core Update, which officially finished on July 17, delivered one of the most disruptive shake-ups in years, reshuffling rankings across health, retail, and finance and leaving many sites searching for stability (Google, 2025; Schwartz, 2025a, 2025b). At the same time, AI Overviews continued to change user behavior in measurable ways — Pew Research found that when AI summaries appear, users click on traditional results nearly half as often, while Semrush reported they now show up in more than 13% of queries (Pew Research Center, 2025; Semrush, 2025). The result is clear: visibility is shifting from blue links to citations within AI-driven summaries, making structured content and topical authority more important than ever.

Privacy also took center stage. DuckDuckGo announced two updates in July: the option to block AI-generated images from results on July 14, and a browser redesign on July 22 that added real-time privacy feedback and anonymous AI integration (DuckDuckGo, 2025; PPC Land, 2025a, 2025b). These moves underscore how authenticity and trust are emerging as competitive differentiators, even as Google maintains close to 90% global market share (Statcounter Global Stats, 2025).

Together, these shifts point to an SEO environment defined by convergence: volatility from core updates, visibility challenges from AI Overviews, and renewed emphasis on privacy-first design. Success in this landscape depends on adapting quickly — not just to Google’s dominance, but to the broader dynamics of how people search, click, and trust.

What Happened

Google officially completed the June 2025 Core Update on July 17, after just over 16 days of rollout (Google, 2025; Schwartz, 2025a). This update was one of the largest in recent memory, driving heavy movement across industries. Search Engine Land’s data analysis showed that 16% of URLs ranking in the top 10 had not appeared in the top 20 before, the highest churn rate in four years (Schwartz, 2025b). Sectors like health and retail felt the sharpest volatility, while finance saw more stability. Even after the official end date, ranking swings remained heated through late July, reminding SEOs that recovery is rarely immediate (Schwartz, 2025c).

Layered onto this volatility was the accelerating role of AI Overviews. According to Pew Research, when an AI summary appears in search results, only 8% of users click on a traditional result, compared to 15% when no summary is present (Pew Research Center, 2025). Semrush data confirmed that AI Overviews now appear in more than 13% of queries, with categories like Science, Health, and People & Society seeing the fastest growth (Semrush, 2025). The combined effect is a steady rise in zero-click searches, with publishers and brands competing for visibility in citation panels rather than just the classic blue links.

Meanwhile, DuckDuckGo pushed its privacy-first positioning further. On July 14, it gave users the option to block AI-generated images from results (PPC Land, 2025a). Just days later, on July 22, it unveiled a browser redesign with a streamlined interface, real-time privacy feedback, and anonymous AI integration (DuckDuckGo, 2025; PPC Land, 2025b). These updates reinforce DuckDuckGo’s differentiation strategy, targeting users who value authenticity and transparency over algorithmic convenience.

Finally, Statcounter’s July snapshot reaffirmed Google’s dominance at nearly 90% global market share, with Bing at 4%, Yahoo at 1.5%, and DuckDuckGo under 1% (Statcounter Global Stats, 2025). Yet while small in volume, DuckDuckGo’s moves reflect a deeper trend — search diversification around privacy and user trust.

Factics: Facts, Tactics, KPIs

Fact: The June 2025 Core Update saw 16% of top 10 URLs newly ranked — the highest churn in four years (Schwartz, 2025b).

Tactic: Re-optimize affected pages by expanding topical depth and reinforcing E-E-A-T signals instead of pruning.

KPI: Average keyword position improvement across refreshed content.

Fact: Users click only 8% of traditional links when AI summaries appear, versus 15% when they don’t (Pew Research Center, 2025).

Tactic: Add FAQ schema, concise answer blocks, and authoritative citations to increase chances of inclusion in AI Overviews.

KPI: Ratio of impressions to clicks in Google Search Console for AI-affected queries.
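
For the FAQ schema tactic above, a minimal sketch generating schema.org FAQPage markup; the question and answer text are placeholders drawn from this post’s own Q&A.

```python
# Emit schema.org FAQPage JSON-LD for inclusion in a page's <head>.
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How do I optimize for AI Overviews?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Structure answers clearly, use FAQ schema, and cite authoritative sources.",
        },
    }],
}
print(f'<script type="application/ld+json">{json.dumps(faq, indent=2)}</script>')
```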

Fact: DuckDuckGo’s July update introduced a browser redesign with privacy feedback icons and gave users the option to filter AI images (DuckDuckGo, 2025; PPC Land, 2025a, 2025b).

Tactic: Use original, source-cited visuals and message privacy in content strategy to attract DDG’s audience.

KPI: Month-over-month growth in DuckDuckGo referral traffic.

Lessons in Action

1. Audit, don’t panic. Map keyword drops against the June–July rollout window before making changes.

2. Optimize for Overviews. Treat AI summaries as a surface: concise content, schema markup, authoritative citations.

3. Invest in visuals. Replace AI-stock imagery with original media where possible.

4. Diversify your footprint. Google-first still rules, but dedicate ~10% of SEO effort to Bing and DuckDuckGo.

Reflect and Adapt

July’s landscape reinforces a truth: SEO is no longer only about blue links. The Core Update pushed volatility across industries, while AI Overviews are rewriting how people interact with results. Privacy-focused alternatives like DuckDuckGo are carving space by rejecting synthetic defaults. To thrive, brands need a portfolio approach — optimizing content to be cited in AI features, maintaining technical excellence for Google’s updates, and signaling authenticity where privacy matters. This isn’t fragmentation; it’s convergence around user trust and usefulness.

Common Questions

Q: Should I rewrite all content that lost rankings in July?
A: No. Benchmark affected pages against the June 30–July 17 update window and enhance quality; avoid knee-jerk deletions during volatility.

Q: How do I optimize for AI Overviews?
A: Structure answers clearly, use FAQ schema, and cite authoritative sources. Prioritize concise, trustworthy summaries.

Q: Does DuckDuckGo really matter with <1% global share?
A: Yes. Its audience skews privacy-first, meaning higher engagement and trust. Optimize for authenticity and clear privacy signals.

Q: Is Bing worth attention at ~4% share?
A: Yes. Bing’s integration with Microsoft products ensures sustained visibility, especially for enterprise and productivity-driven searches.

📹 Video: Google search ranking volatility remains heated – Search Engine Roundtable, July 25, 2025

Disclosure

This blog was written with the assistance of AI research and drafting tools, using only verified sources published on or before July 31, 2025. Human review shaped the final narrative, transitions, and tactical recommendations.

References

DuckDuckGo. (2025, July 22). DuckDuckGo browser: Fresh new look, same great protection. SpreadPrivacy. https://spreadprivacy.com/browser-visual-refresh/

Google. (2025, July 17). June 2025 core update [Status dashboard incident report]. Google Search Status Dashboard. https://status.search.google.com/incidents/riq1AuqETW46NfBCe5NT

Pew Research Center. (2025, July 22). Google users are less likely to click on links when an AI summary appears in the results. Pew Research Center. https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/

PPC Land. (2025, July 14). DuckDuckGo users can now block AI images from search results. PPC Land. https://ppc.land/duckduckgo-users-can-now-block-ai-images-from-search-results/

PPC Land. (2025, July 24). DuckDuckGo browser redesign focuses on streamlined privacy interface. PPC Land. https://ppc.land/duckduckgo-browser-redesign-focuses-on-streamlined-privacy-interface/

Schwartz, B. (2025, July 17). Google June 2025 core update rollout is now complete. Search Engine Land. https://searchengineland.com/google-june-2025-core-update-rollout-is-now-complete-458617

Schwartz, B. (2025, July 24). Data providers: Google June 2025 core update was a big update. Search Engine Land. https://searchengineland.com/data-providers-google-june-2025-core-update-was-a-big-update-459226

Schwartz, B. (2025, July 25). Google search ranking volatility remains heated. Search Engine Roundtable. https://www.seroundtable.com/google-search-ranking-volatility-remains-heated-39828.html

Semrush. (2025, July 22). Semrush AI Overviews study: What 2025 SEO data tells us about Google’s search shift. Semrush Blog. https://www.semrush.com/blog/semrush-ai-overviews-study/

Statcounter Global Stats. (2025, July 31). Search engine market share worldwide. Statcounter. https://gs.statcounter.com/search-engine-market-share


Open-Source Expansion and Community AI

July 28, 2025 by Basil Puglisi


The table is crowded, laptops half open, notes scattered. Deadlines are already late. Budgets are thin, thinner than they should be. Expectations do not budge: with AI scanners and criticism aimed at everything, the work has to feel human or it fails, and as we learned in May, looking professionally polished now reads as fake on apps like Originality.ai, so the work got a lot harder.

The difference is in the stack. Open-source models carry the weight, community hubs fill the spaces between, and the outputs make it to the finish line without losing trust. LLaMA 4 reads text and images in one sweep.

A SaaS director once waved an invoice like it was a warning flare. Costs had doubled in one quarter. The team swapped in DeepSeek and the bill fell by almost half. Not a typo. The panic eased because the math spoke louder than any promise. The point here is simple, when efficiency holds up in numbers, adoption sticks.

LLaMA 4 resets how briefs are built. Meta calls it “the beginning of a new era of natively multimodal AI innovation” (Meta, 2025). In practice it means screenshots, notes, and specs do not scatter into separate drafts. Claims tie directly to visuals and citations, so context stays whole. The tactic is to feed it real packets of work, then track acceptance rates and edits per draft. Who gains? Content teams, product leads, anyone who needs briefs to land clean on the first pass.

DeepSeek R1 0528 moves reasoning closer to the edge. MIT license, single GPU, stepwise logic baked in. Outlines arrive with examples and criteria already attached, so first drafts come closer to final. The tactic is to set it as the standard briefing layer, then measure reuse rates, time to first draft, and cost per inference. The groups that win are SaaS and mid-market players, the ones priced out of heavy hosted models but still expected to deliver consistency at scale.

Mistral through Bedrock brings trust to structured-to-narrative work. Enterprises already living in that channel gain adoption without extra risk. Spreadsheets, changelogs, and other structured inputs convert to usable narratives quickly. The tactic is to focus it on repetitive data-to-story tasks, then track cycle time from handoff to publish and the exception rate in review. It works best for data-heavy operations where speed and reliability keep clients from second guessing.

Hugging Face hubs anchor the collaborative side. Maintained repos, model cards, and stable translations replace half-built scripts and risky extensions. Localization that once dragged for weeks now finishes in days. The tactic is to pin versions, run checks in one space, and log provenance next to every output. Who benefits? Nonprofits, educators, consumer brands trying to work across languages without burning their budgets on agencies.
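
A minimal sketch of that pinning tactic using the Hugging Face transformers library; the model is a public translation checkpoint, and the `REVISION` value should be replaced with an exact commit hash to truly pin the version.

```python
# Pin a model revision and log provenance next to the output.
from transformers import pipeline

MODEL_ID = "Helsinki-NLP/opus-mt-en-es"  # public English-to-Spanish model on the Hub
REVISION = "main"                         # replace with an exact commit sha, never a moving branch

translator = pipeline("translation", model=MODEL_ID, revision=REVISION)
result = translator("Keep the safety guide within arm's reach.")[0]["translation_text"]

# Provenance record to store alongside every published asset.
provenance = {"model": MODEL_ID, "revision": REVISION, "task": "translation", "output": result}
print(provenance)
```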

Regulation circles overhead. The EU presses forward with the AI Act, the U.S. keeps safety and disclosure in focus, and China frames AI policy as industrial leverage (RAND, 2025). The tactic is clear: keep provenance logs, consent registers, and export notes in the QA process. The payoff shows in fewer legal delays and faster audits. This matters most to exporters and nonprofits, groups that need both speed and credibility to hold stakeholder trust.
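
One lightweight way to keep those records inside QA is an append-only log written beside each asset. The field names below are assumptions; the habit of recording them at publish time is the point.

```python
# Minimal sketch: one QA log entry per published asset, kept as JSON lines.
# Field names are illustrative; adapt them to your own register.
import json
import time

def log_qa_entry(path, asset_id, model, license_note, consent_ref, export_note):
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "asset": asset_id,
        "model": model,
        "license": license_note,
        "consent": consent_ref,
        "export": export_note,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_qa_entry("qa_provenance.jsonl", "guide-es-v2", "opus-mt-en-es@main",
             "MIT", "consent-2025-114", "export notes reviewed for EU AI Act")
```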

Best Practice Spotlights
BigDataCorp turned static spreadsheets into “Generative Biographies” with Mistral through Bedrock. Twenty days from concept to delivery. Client decision-making costs down fifty percent. Not theory. Numbers. One manager said it felt like plugging leaks in a boat. Suddenly the pace held steady. The lesson is clear: keep reasoning close to the data and adoption inside rails people already trust.

Spotify used LLaMA 4 to push its AI DJ past playlists. Narrated insights in English and Spanish, recommendations that felt intentional, not random, discovery rates that rose instead of fading. Engagement held long after the novelty. The lesson is clear: blend multimodal reasoning with platform data and loyalty grows past the campaign window.

Creative Consulting Corner
A SaaS provider is crushed under inference bills. DeepSeek shapes stepwise outlines, Mistral converts structured fields, and LLaMA 4 blends inputs into explainers. Costs fall forty percent, cadence steadies, two hires get funded from the savings. Optimization tip: publish a dashboard with cycle times and costs so leadership argues from numbers, not gut feel.

A consumer retailer watches brand consistency slip across campaigns. LLaMA 4 drafts captions from product images and specs, Hugging Face handles localization, presets hold visuals in line. Assets land on time, carousel engagement climbs, fatigue slows. Optimization tip: keep one visual anchor steady each campaign; brand memory compounds.

A nonprofit needs multilingual safety guides with no agency budget. Hugging Face supplies translations, DeepSeek builds modules, and Mistral smooths phrasing. Distribution costs drop by half, completion improves, trust rises because provenance is logged. Optimization tip: publish a model card and rights register where donors can see them. Credibility is as important as cost.

Closing thought
Here is the thing: infrastructure only matters when it closes the space between idea and impact. LLaMA 4 turns mixed inputs into briefs that hold together, DeepSeek keeps structured reasoning affordable, Mistral delivers steady outputs inside enterprise rails, and Hugging Face makes collaboration practical. With provenance and rights running in the background, not loud but steady, teams gain speed they can measure, trust they can defend through repeatable checks and balances, and credibility that lasts.

References
AI at Meta. (2025, April 4). The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation.
C-SharpCorner. (2025, April 30). The rise of open-source AI: Why models like Qwen3 matter.
Apidog. (2025, May 28). DeepSeek R1 0528, the silent revolution in open-source AI.
Atlantic Council. (2025, April 1). DeepSeek shows the US and EU the costs of failing to govern AI.
MarkTechPost. (2025, May 30). DeepSeek releases R1 0528, an open-source reasoning AI model.
Open Future Foundation. (2025, June 6). AI Act and open source.
RAND Corporation. (2025, June 26). Full stack, China’s evolving industrial policy for AI.
Masood, A. (2025, June 5). AI use-case compass — Retail & e-commerce. Medium.
Measure Marketing. (2025, May 20). How AI is transforming B2B SaaS marketing. Measure Marketing.
McKinsey & Company. (2025, June 13). Seizing the agentic AI advantage.

Filed Under: AI Artificial Intelligence, Basil's Blog #AIa, Branding & Marketing, Business, Content Marketing, Data & CRM, Search Engines, Social Media, Workflow

Navigating SEO After Google’s June 2025 Core Update

July 7, 2025 by Basil Puglisi Leave a Comment

SEO 2025, Google June Core Update, AI Overviews, zero-click searches, structured data, Core Web Vitals, Bing SEO, Yandex optimization

Search visibility is in transition. Google’s June 2025 Core Update, which launched on June 30, shook rankings across industries while simultaneously underscoring how much search has moved beyond ten blue links. For many sites, the shift was dramatic: “Over 16% of URLs ranking in the top 10 after the update didn’t rank in the top 20 before,” according to Search Engine Land (2025). That volatility coincided with the expansion of AI Overviews, the persistence of zero-click behaviors, and continued pressure to deliver structured, mobile-first experiences.

The result is an SEO environment where the “so what” is clear: success is measured not only in rankings but also in impressions within AI summaries, eligibility for rich results, and performance across multiple engines. For marketers, the KPIs that matter now include ranking stability, AI Overview capture rate, Core Web Vitals pass percentage, and non-Google traffic share.

What Happened

Google’s June 2025 Core Update officially began rolling out on June 30. Within days, volatility was recorded across sectors, and by the time analysis was published, data providers confirmed it was among the most disruptive updates in recent memory. More than one in six of the top-10 URLs were newcomers, highlighting the magnitude of change (Search Engine Land, 2025).

At the same time, AI features accelerated. Semrush found AI Overviews appeared in 13.14% of queries by March, nearly doubling from January (Semrush, 2025). Google’s own disclosure at I/O emphasized that AI Mode and Overviews are driving over 10% incremental usage for query types where these features appear (Google, 2025). Yet visibility in these surfaces often comes without clicks. AdLift documented that 71% of searches now result in no organic click at all, leaving brands to measure impressions and mentions rather than traffic alone (AdLift, 2025).

Structured data remained central. Jameela Ghann’s June guide reinforced that JSON-LD markup unlocks higher CTRs through enhanced listings (Ghann, 2025), while Webflow’s July explainer stressed its scalability for larger SEO and Answer Engine Optimization projects (Webflow, 2025). Without schema, eligibility for snippets and AI summaries is severely limited.

Technical SEO continued to shape outcomes. Capsicum Media Works reported that only 47% of sites currently pass Core Web Vitals (2025). Clevertize emphasized that mobile performance is critical, urging marketers to prioritize responsive fixes and real-device testing (2025).

Finally, diversification remains essential. Lawrence Hitches observed Google’s global share at 89.54%, Bing with 7.5% in the U.S., and Yandex dominating Russia at 65% (2025). For brands with regional audiences, optimization can’t end with Google.

Why It Matters (Factics)

Fact: Over 16% of top-10 results after the June update were new entrants. [SEL]

Tactic: Annotate rankings during update windows, avoid reactive rewrites until volatility settles, and re-audit content depth post-rollout.

KPI: % of tracked keywords maintaining or regaining top-10 visibility after three weeks.

Fact: AI Overviews triggered in 13.14% of queries by March 2025. [Semrush]

Tactic: Structure content with clear H2/H3 headings, FAQs, and concise explanations to increase eligibility.

KPI: AI Overview capture rate across priority keywords.

Fact: 71% of queries produce no organic click. [AdLift]

Tactic: Shift reporting to include impressions, brand mentions, and AI visibility alongside CTR.

KPI: Ratio of impressions vs. clicks for high-value queries.
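
For teams building that ratio into reporting, here is a rough sketch against a Search Console CSV export; the column names are assumptions that may differ depending on how your export is configured.

```python
# Minimal sketch: impressions-to-clicks ratio from a Search Console export.
# Column names are assumed from the standard CSV export.
import pandas as pd

df = pd.read_csv("search_console_queries.csv")  # assumed columns: Query, Clicks, Impressions

# Guard against divide-by-zero for queries with impressions but no clicks.
df["impressions_per_click"] = df["Impressions"] / df["Clicks"].clip(lower=1)

# Surface high-value queries where visibility far outruns traffic.
high_value = df[df["Impressions"] > 1000].sort_values(
    "impressions_per_click", ascending=False
)
print(high_value[["Query", "Impressions", "Clicks", "impressions_per_click"]].head(10))
```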

Fact: JSON-LD schema enables enhanced listings and scalability. [Ghann, Webflow]

Tactic: Audit site templates for Article, FAQ, and HowTo schema; validate with Google’s Rich Results Test.

KPI: Rich result eligibility % and CTR delta for enhanced vs. plain listings.
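
A minimal sketch of what that markup looks like when generated from template fields. The values are placeholders, and the resulting JSON belongs in a script tag of type application/ld+json (or is rendered server-side); property names follow schema.org's Article type.

```python
# Minimal sketch: build Article JSON-LD from template fields.
# Values are placeholders; validate output with Google's Rich Results Test.
import json

def article_jsonld(headline, author, date_published, url):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
        "mainEntityOfPage": url,
    }, indent=2)

print(article_jsonld(
    "Navigating SEO After Google's June 2025 Core Update",
    "Basil Puglisi",
    "2025-07-07",
    "https://example.com/sample-post",  # placeholder URL
))
```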

Fact: Fewer than half of sites pass Core Web Vitals. [Capsicum]

Tactic: Target LCP <2.5s, INP <200ms, CLS <0.1; prioritize fixes on mobile templates.

KPI: % of URLs passing CWV in Search Console (mobile and desktop).
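
As a quick illustration of those thresholds in practice, here is a small pass/fail check over example field metrics. The numbers are invented for the demo; in practice pull them from the Chrome UX Report or Search Console's Core Web Vitals report.

```python
# Minimal sketch: grade pages against the CWV targets above.
# LCP < 2.5 s, INP < 200 ms, CLS < 0.1; metric values are invented.
def passes_cwv(lcp_s: float, inp_ms: float, cls: float) -> bool:
    return lcp_s < 2.5 and inp_ms < 200 and cls < 0.1

pages = {
    "/pricing": (2.1, 180, 0.05),  # passes all three
    "/blog":    (3.4, 250, 0.02),  # fails on LCP and INP
}
for url, (lcp, inp, cls) in pages.items():
    print(url, "PASS" if passes_cwv(lcp, inp, cls) else "FAIL")
```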

Fact: Mobile performance is decisive for rankings. [Clevertize]

Tactic: Prioritize responsive design, compress images, test on real devices.

KPI: Mobile vs. desktop CWV performance deltas.

Fact: Bing holds 7.5% U.S. share; Yandex dominates Russia with 65%. [Hitches]

Tactic: Maintain Bing Places listings, localize for Yandex, and track regional engine performance.

KPI: Traffic diversification across engines.

Fact: AI Mode increased query volume by >10% in supported markets. [Google]

Tactic: Optimize for entity clarity and authoritative sourcing.

KPI: Sessions referred from AI Mode experiences.

Lessons in Action

1. Wait, then act: Don’t rewrite content mid-rollout. Hold steady until rankings stabilize.

2. Schema at scale: Ensure JSON-LD coverage across Article, FAQ, and HowTo templates.

3. Measure visibility differently: Add AI Overview impressions and brand mentions to dashboards.

4. Fix technical debt: Improve LCP, INP, and CLS — especially on mobile.

5. Diversify engines: Maintain presence in Bing and Yandex for regional resilience.

Reflect and Adapt

SEO in July 2025 is about more than winning keywords. Google’s update reinforced the importance of trustworthy, structured content, while AI Overviews and zero-click behavior redefined how success is measured. Technical SEO remains a differentiator, and multi-engine optimization protects reach. The lesson: broaden metrics, strengthen fundamentals, and position content for both human readers and AI-driven systems.

Common Questions

Q: Should I react immediately to ranking drops after an update?

A: No. Core updates bring volatility. Wait for stabilization before making significant changes.

Q: How do I measure success when clicks decline?

A: Track impressions, AI Overview presence, and brand mentions — not just CTR.

Q: Is schema markup optional?

A: No. Structured data is now essential for eligibility in rich results and AI summaries.

Disclosure

This article was created with the assistance of AI research systems. All nine sources were independently verified, publicly accessible, and published on or before June 30, 2025 unless noted for update completion.

References

Search Engine Land. (2025, July 17). Google June 2025 core update rollout is now complete. https://searchengineland.com/google-june-2025-core-update-rollout-is-now-complete-458617

Semrush. (2025, July 22). AI Overviews Study: What 2025 SEO Data Tells Us. https://www.semrush.com/blog/semrush-ai-overviews-study/

AdLift. (2025, July 1). What Is Zero Click Search? https://www.adlift.com/blog/zero-click-search-seo-strategy/

Ghann, J. (2025, June 18). How to Use Structured Data & Schema for Blog SEO. https://www.jameelaghann.com/marketing-lab/how-to-use-structured-data-schema-blog

Webflow. (2025, July 31). Schema markup explained. https://webflow.com/blog/schema-markup

Capsicum Media Works. (2025, June 30). Core Web Vitals: Ultimate SEO Guide for 2025. https://capsicummediaworks.com/core-web-vitals/

Clevertize. (2025, June 26). Core Web Vitals for the 2025 Update. https://clevertize.com/blog/mastering-core-web-vitals-for-the-2025-update/

Hitches, L. (2025, July 1). Differences Between Search Engines. https://www.lawrencehitches.com/search-engine-differences/

Google. (2025, May 20). AI Mode in Google Search. https://blog.google/products/search/google-search-ai-mode-update/

Filed Under: AI Artificial Intelligence, AIgenerated, Business, Content Marketing, Search Engines, SEO Search Engine Optimization Tagged With: SEO

Creative Collaboration and Generative Design Systems

June 23, 2025 by Basil Puglisi Leave a Comment

Basil Puglisi, generative design systems, HeyGen Avatar IV, Adobe Firefly, Canva AI, DeepSeek R1, ElevenLabs, Surfer SEO, AI content workflow, marketing compliance, brand safety

A small team stares at a crowded content calendar.  New campaigns, product notes, community updates.  The budget will not stretch, the deadline will not move.  The stack does the heavy lifting instead.  One photograph becomes a spokesperson video.  Design ideas are worked up inside the tools the team already knows.  Reasoning support runs on modest hardware.  Audio moves from a single narrator to a believable conversation.  Compliance sits inside the process, quiet and steady.

This is where the change shows up.  A single script turns into localized clips that feel more human because eye contact, small gestures, and natural pacing keep attention.  Design stops waiting for a specialist because brand safe generation lives in the same place as the layout.  A reasoning model helps shape briefs and outlines without a big infrastructure bill, while authority scoring keeps written work aligned to what search engines consider credible.  Audio that once sounded flat now carries different voices, different roles, and a rhythm that holds listeners.

“The economic impact of generative AI in design is estimated at 13.9 billion dollars, driven by efficiency and ROI gains across enterprises and SMBs.” via ProCreator

HeyGen Avatar IV turns a still photo into a spokesperson video that feels human. It renders in 1280p+ HD with natural hand movement, head motion, and expressive facial detail so the message holds attention. Use it by writing one master script, loading an approved headshot with likeness rights, selecting the avatar style, and generating localized takes with recorded voice or text to speech. Put these clips on product explainers, onboarding steps, and multilingual FAQs. Track video completion rate, time to localize per language, and demo conversions from pages that embed the clip.

Adobe Firefly for enterprise serves as the safe image engine inside the design stack. Brand tuned models and commercial protections keep production compliant while teams create quickly. Put it to work by encoding your brand style as prompts, building a small library of approved backgrounds and treatments, and routing outputs through quick review in Creative Cloud. Replace the slow concepting phase with three to five generated options, curate in minutes, then finalize in Illustrator or Photoshop. Measure cycle time per concept, legal exceptions avoided, and consistency of brand elements across campaigns.

Canva AI turns day to day layout needs into a repeatable system non designers can run. The tools generate variations, resize intelligently, and preserve spacing and hierarchy across formats. Use it by creating master templates for social, email headers, blog art, and one pagers, then generate audience specific variations and export the whole set at once. Push directly to channels so creative does not go stale. Watch cycle time per asset, engagement lift after refresh, and paid performance stability as fatigue drops.

DeepSeek R1 0528 is a distilled reasoning model that runs on a single GPU, which keeps structured thinking affordable. Use it to shape briefs, outlines, and acceptance criteria that writers and designers can follow. Feed competitor pages, internal notes, and product context, then ask for a stepwise outline with evidence requirements and concrete examples. The goal is to standardize planning so first drafts land closer to done. Track outline acceptance rate, time to first draft, and cost per inference against larger hosted models.
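
A rough sketch of how such a briefing request might be assembled before it is sent to the model. The structure mirrors the tactic above (inputs, evidence requirements, acceptance criteria); the exact wording is an untested assumption, not a vetted prompt.

```python
# Minimal sketch: assemble a stepwise outline request from planning inputs.
# The prompt wording is illustrative; the structure is what standardizes planning.
def build_outline_prompt(competitor_notes: str, internal_notes: str,
                         product_context: str) -> str:
    return "\n\n".join([
        "You are drafting a content brief. Work stepwise.",
        "COMPETITOR PAGES:\n" + competitor_notes,
        "INTERNAL NOTES:\n" + internal_notes,
        "PRODUCT CONTEXT:\n" + product_context,
        "For each outline section, attach: the claim, the evidence required "
        "to support it, one concrete example, and an acceptance criterion "
        "a reviewer can check before the draft is accepted.",
    ])

print(build_outline_prompt("Rival guide covers setup only.",
                           "Support tickets spike on step 3.",
                           "v2.4 adds guided onboarding."))
```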

Surfer authority signals bring credibility cues into the planning desk. The tool reads the competitive landscape, suggests topical coverage, and scores content against what search engines reward. Operationalize it by building a topical map, selecting gaps with realistic difficulty, and attaching internal link targets before drafting. Publish and refresh as signals move to maintain visibility. Measure non brand rankings on priority clusters, correlation between content score and traffic, and new internal linking opportunities created per month.

ElevenLabs voices convert flat narration into believable audio across languages. Professional and instant cloning capture tone and clarity so training and help content keep attention. Use it by collecting consented voice samples, creating role profiles, and generating multi voice versions of modules and support pages. For nonprofits and education, script a facilitator plus learner voice; for product, add a support expert voice for tricky steps. Track listen through rate, course completion, and support ticket deflection from pages with audio.
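
For the multi-voice workflow, here is a minimal sketch against the ElevenLabs text-to-speech REST endpoint. The API key and voice ids are placeholders you would replace with consented voices from your own account, and the model name is an assumption to verify in the dashboard.

```python
# Minimal sketch: render role-based narration via the ElevenLabs TTS endpoint.
# API key, voice ids, and model name are placeholders.
import requests

API_KEY = "YOUR_XI_API_KEY"  # placeholder
ROLES = {"facilitator": "VOICE_ID_1", "learner": "VOICE_ID_2"}  # placeholder ids

def speak(role: str, text: str, out_path: str) -> None:
    voice_id = ROLES[role]
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": API_KEY},
        json={"text": text, "model_id": "eleven_multilingual_v2"},  # assumed model
        timeout=60,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # MP3 audio bytes

speak("facilitator", "Welcome to module one.", "module1_facilitator.mp3")
speak("learner", "What does the safety latch do?", "module1_learner.mp3")
```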

Regulatory pressure has not eased. Name, image, and likeness protections are active topics, entertainment lawyers list AI-related IP disputes among their top issues, and federal guidance clarifies expectations for training data and provenance. It is practical to keep watermarking, rights clearances, and transparent sourcing inside the workflow so speed gains do not turn into risk later.

Best Practice Spotlights

Unigloves Derma Shield

A professional product line required launch visuals without the drag of traditional shoots.  The team generated hyper realistic imagery with Firefly and Midjourney, then refined compositions inside the design pipeline.  The process trimmed production time by more than half and kept a consistent look across audiences.  Quality and speed aligned because generation and curation lived in the same place.

Coca Cola Create Real Magic

A global brand invited fans to make branded art using OpenAI tools.  The community answered, and the creative volume pushed past a single campaign window.  The result was felt in engagement and brand affinity, not just in one round of impressions.  For smaller teams, the lesson is to schedule community creation, then curate and repurpose the best pieces across owned and paid placements.

Creative Consulting Corner

A small SaaS company needs product explainers in several languages.  HeyGen provides lifelike presenters and Firefly supplies consistent visuals, while authority checks in Surfer help the written support pages hold up in search.  Demo interest rises because the materials are easier to understand and arrive on time.

A regional retailer wants seasonal refreshes that do not crawl.  Canva AI handles layouts, Firefly supplies on brand variations, and short voice tags from ElevenLabs localize the message for different cities.  The work ships quickly, social engagement lifts, and paid results improve because creative does not go stale.

An advocacy nonprofit must train volunteers across communities.  NotebookLM offers portable audio overviews of core modules, while multi voice dialogue in ElevenLabs simulates the feel of a group session.  Visuals produced in Canva, with Firefly elements, keep the story familiar across channels.  Completion goes up and more volunteers stay with the program.

Closing thought

Infrastructure matters when it shortens the time between idea and impact.  Avatars make messages feel human without crews.  Design systems keep brands steady while production scales.  Reasoning supports content that stands up to review.  Multi voice audio invites people into the story.  With provenance, rights, and disclosure running in the background, teams earn speed they can measure, trust they can defend, and credibility that lasts.

References

AKOOL. (2025, April 9). HeyGen alternatives for AI videos & custom avatars. https://akool.com/blog-posts/heygen-alternatives-for-ai-videos-custom-avatars

Adobe Inc. (2025, March 18). Adobe Firefly for Enterprise | Generative AI for content creation. https://business.adobe.com/products/firefly-business.html

B2BSaaSReviews. (2025, January 8). 10 best AI marketing tools for B2B SaaS in 2025. https://b2bsaasreviews.com/ai-marketing-tools-b2b/

Baytech Consulting. (2025, May 30). Surfer SEO: An analytical review 2025. https://www.baytechconsulting.com/blog/surfer-seo-an-analytical-review-2025

Databox. (2024, October 17). AI adoption in SMBs: Key trends, benefits, and challenges from 100+ SMBs. https://databox.com/ai-adoption-smbs

DataFeedWatch. (2025, March 10). 11 best AI advertising examples of 2025. https://www.datafeedwatch.com/blog/best-ai-advertising-examples

DhiWise. (2025, May 27). ElevenLabs AI audio platform: Game-changer for creators. https://www.dhiwise.com/post/elevenlabs-ai-audio-platform

ElevenLabs. (2023, August 20). Professional voice cloning: The new must-have for podcasters. https://elevenlabs.io/blog/professional-voice-cloning-the-new-must-have-for-podcasters

ElevenLabs. (2025, February 8). ElevenLabs voices: A comprehensive guide. https://elevenlabs.io/voice-guide

Forbes. (2024, October 15). Driving real business value with generative AI for SMBs and beyond. https://www.forbes.com/sites/garydrenik/2024/10/15/driving-real-business-value-with-generative-ai-for-smbs-and-beyond/

G2. (2025, March 20). Adobe Firefly reviews 2025: Details, pricing, & features. https://www.g2.com/products/adobe-firefly/reviews

Google Cloud. (2024, October 2). Generating value from generative AI: Global survey results. https://cloud.google.com/transform/survey-generating-value-from-generative-ai-roi-study

HeyGen. (2025, May 23). A comprehensive guide to filming lifelike custom avatars. https://www.heygen.com/blog/a-comprehensive-guide-to-filming-lifelike-custom-avatars

HeyGen. (2025, May 23). Create talking photo avatars in 1280p+ HD resolution. https://www.heygen.com/avatars/avatar-iv

Hugging Face. (2025, May 29). deepseek-ai/DeepSeek-R1-0528. https://huggingface.co/deepseek-ai/DeepSeek-R1-0528

Madgicx. (2025, April 30). The 10 most inspiring AI marketing campaigns for 2025. https://madgicx.com/blog/ai-marketing-campaigns

Markopolo.ai. (2025, March 13). Top 10 digital marketing case studies [2025]. https://www.markopolo.ai/post/top-10-digital-marketing-case-studies-2025

NYU Journal of Intellectual Property & Entertainment Law. (2024, February 29). Beyond incentives: Copyright in the age of algorithmic production. https://jipel.law.nyu.edu/beyond-incentives-copyright-in-the-age-of-algorithmic-production/

ProCreator. (2025, January 27). The $13.9 billion impact of generative AI design. https://procreator.design/blog/billion-impact-generative-ai-design/

ResearchGate. (2025, February 11). The impact of generative AI on traditional graphic design workflows. https://www.researchgate.net/publication/378437583_The_Impact_of_Generative_AI_on_Traditional_Graphic_Design_Workflows

Salesgenie. (2025, April 29). Discover how AI can transform sales and marketing for SMBs. https://www.salesgenie.com/blog/ai-sales-marketing/

Surfer SEO. (2025, January 27). What’s new at Surfer? Product updates January 2025. https://surferseo.com/blog/january-2025-update/

TechCrunch. (2025, May 29). DeepSeek’s distilled new R1 AI model can run on a single GPU. https://techcrunch.com/2025/05/29/deepseeks-distilled-new-r1-ai-model-can-run-on-a-single-gpu/

U.S. Copyright Office. (2025, May 6). Generative AI training report. https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf

U.S. Patent and Trademark Office. (2024, August 5). Name, image, and likeness protection in the age of AI. https://www.uspto.gov/sites/default/files/documents/080524-USPTO-Ai-NIL.pdf

Variety. (2025, April 9). Variety’s 2025 Legal Impact Report: Hollywood’s top attorneys. https://variety.com/lists/legal-impact-report-2025-hollywood-top-attorneys/

Filed Under: AI Artificial Intelligence, Basil's Blog #AIa, Branding & Marketing, Business, Workflow
