@BasilPuglisi

Content & Strategy, Powered by Factics & AI, Since 2009

From AI Policy to Financial System Design: What the US Treasury’s AI Innovation Series Actually Signals

March 27, 2026 by Basil Puglisi

Treasury’s March 2026 AI Innovation Series is not a standalone announcement. It is the operational phase of a two-year sequence that now treats AI adoption as a financial stability issue, a competitiveness issue, and a regulatory design issue at the same time.

[Illustration: policy documents, shared frameworks, and a convening table representing Treasury’s AI sequence]

Failure to Adopt Is Now a Risk Category

Treasury’s March 20, 2026, announcement is not just another Washington roundtable notice. It is a signal that the federal government now treats AI adoption in finance as a financial stability issue, a competitiveness issue, and a regulatory design issue at the same time. The new AI Innovation Series, launched by the Office of the Financial Stability Oversight Council and Treasury’s Artificial Intelligence Transformation Office, is framed as a public private initiative to support the strength and resilience of the U.S. financial system as AI moves deeper into fraud detection, cybersecurity, credit underwriting, and operational risk management. Treasury is no longer describing AI as a side technology, because the department now treats AI as infrastructure inside the financial system itself.

That change in posture matters. The key statement in the announcement comes from Secretary Scott Bessent, who says Treasury is moving from a posture focused on constraint toward one that recognizes failure to adopt productivity enhancing technology as its own risk. That is more than pro innovation language, because it reframes regulatory risk itself. Treasury is saying that lagging adoption can weaken resilience, not just that reckless adoption can create harm. Deputy Assistant Secretary for FSOC Christina Skinner reinforced this framing when she stated that AI adoption is critical to America’s financial stability and a precondition to economic growth, and that institutions unable to deploy tools that improve fraud detection, credit allocation, and operational resilience make the system less efficient and less secure. In plain English, the department is trying to shift the policy conversation from “How do we stop AI from going wrong?” to “How do we keep the financial system safe while making sure it does not fall behind?”

Three Reports Built the Foundation Before the Roundtables

This announcement ties directly to work Treasury has already been building for more than two years. In March 2024, Treasury published its report on Managing Artificial Intelligence Specific Cybersecurity Risks in the Financial Services Sector. That report, written at the direction of Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, identified immediate challenges such as a growing capability gap between large and small institutions and a fraud data divide that makes it harder for firms to train stronger defenses. Treasury’s conclusion then was not to halt adoption but to strengthen information sharing, improve defensive capacity, and build better public private coordination. That early report established the cybersecurity and fraud case for AI governance in finance.

Treasury then widened the lens in December 2024 with its report on the Uses, Opportunities, and Risks of Artificial Intelligence in Financial Services. Based on 103 comment letters received in response to its June 2024 Request for Information, the report found AI already present across compliance, underwriting, customer service, treasury management, internal operations, and product development. It also summarized feedback calling for aligned definitions, clearer standards for data privacy and security, stronger consumer protections, better coordination across regulators, and more public private collaboration to monitor concentration risk and share best practices. Under Secretary for Domestic Finance Nellie Liang stated at the time that Treasury was continuing to engage with stakeholders to deepen its understanding of current uses, opportunities, and associated risks of AI in the financial sector.

The practical resource layer arrived in February 2026. Treasury announced two financial sector tools, an AI Lexicon and a Financial Services AI Risk Management Framework, specifically to give institutions, regulators, and technology providers a common language and more consistent risk practices. Derek Theurer, performing the duties of Deputy Secretary, stated that implementing the President’s AI Action Plan requires practical resources that institutions can use, not aspirational statements. Treasury noted that inconsistent terminology and uneven risk management had become barriers to effective governance and oversight. The FS AI RMF adapts the NIST AI Risk Management Framework to the specific operational, regulatory, and consumer protection considerations of financial services and is designed to be scalable across institutions of varying size and complexity, according to Treasury’s Chief AI Officer Paras Malik.

In the same month, Treasury also announced the conclusion of a major public private initiative through the Artificial Intelligence Executive Oversight Group, a partnership between the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council. That body’s workstreams addressed governance, data practices, transparency, fraud, and digital identity in an integrated way, with six resources released in stages throughout February.

That means the March Innovation Series is not step one but step three. First Treasury studied the field, then it published shared tools, and now it is convening the actors who need to translate those tools into practice.

FSOC Created the Institutional Mandate for the Series

The strongest way to understand the new series is as an operational bridge between strategy and supervision. The 2025 FSOC Annual Report, approved unanimously by the Council on December 11, 2025, already created an AI working group to explore opportunities for AI to promote financial system resilience while also monitoring risks to financial stability. The Council recommended that member agencies use the working group to identify high value use cases that agencies can adapt to improve the efficiency and efficacy of regulation and supervision, and to provide a forum for public private dialogue to identify regulatory impediments to responsible adoption by financial institutions. The report was notably restructured to highlight four priority areas of focus, each with actionable policy recommendations: bolstering Treasury market resilience, addressing cyber threats, enhancing bank supervision and regulation, and harnessing artificial intelligence to promote financial stability.

That institutional mandate carries a framing signal as well. Secretary Bessent’s framing in the FSOC annual report introduced the concept of “Parallel Prosperity,” an era of economic expansion where Wall Street and Main Street grow together. That framing carries through into the Innovation Series announcement. Treasury’s new roundtable series is essentially the public expression of the FSOC mandate. The four roundtables are not random convenings but the implementation vehicle for a policy direction that FSOC had already set in its annual report.

Treasury Is Governing AI Internally While Shaping External Practice

This ties to Treasury’s own internal modernization. Treasury’s AI Strategy, issued in September 2025 by Chief AI Officer Paras Malik and Secretary Bessent in compliance with OMB Memorandum M-25-21, describes a department already applying AI to detect suspicious transaction patterns, forecast economic trends, improve taxpayer services, analyze procurement anomalies, speed document processing, and modernize legacy code. The strategy also describes Treasury’s federated AI governance structure: an AI Governance Board that manages centralized funding and validates high impact use cases, an AI Council of technical leads meeting monthly to review use cases and exchange practices, an AI Transformation Office that coordinates training and policy development, a secure chat pilot, a higher security AI Sandbox, and a tiered oversight approach for high, moderate, and low impact use cases.

That context matters because Treasury is not speaking to the market from the outside. It is building internal AI governance while simultaneously trying to shape external market practice. At the 2025 AWS Federal AI Conference, Malik described a dual track approach where Treasury deploys AI tools for immediate daily impact while simultaneously building a long term strategy to redesign broader processes. That gives the Innovation Series more weight. Treasury is not only telling financial firms to operationalize AI but trying to operationalize it within its own institution.

The Government Accountability Office Confirms the Landscape

Independent oversight corroborates the picture Treasury is building. The Government Accountability Office published its report on Artificial Intelligence: Use and Oversight in Financial Services in May 2025, finding that financial institutions already use AI for countering threats and illicit finance, making credit decisions, managing risk, improving customer service, and enhancing operational efficiency. The GAO report also found that federal financial regulators primarily oversee AI through existing laws, regulations, guidance, and risk based examinations rather than through AI specific rulemaking, and that all regulators using AI as of December 2024 reported using AI outputs in conjunction with other information to inform decisions rather than relying on autonomous AI decision making.

The GAO report identified two gaps. First, the National Credit Union Administration lacks the authority to examine technology service providers, despite credit unions’ increasing reliance on them for AI driven services. Second, NCUA’s model risk management guidance is limited in scope and detail. The GAO recommended that Congress consider granting NCUA examination authority and that the agency enhance its model risk management guidance. Those findings reinforce the structural gap that Treasury’s roundtable series is designed to address: the uneven capacity of regulators and institutions to govern AI at the same level of rigor.

The White House AI Action Plan Sets the Administration Framework

The larger policy tie is the White House AI Action Plan, released on July 23, 2025, as “Winning the Race: America’s AI Action Plan.” That plan identifies more than 90 federal policy actions across three pillars: accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security. It calls for sector specific implementation, practical standards, expanded AI literacy, and accelerated adoption throughout the federal government. Treasury’s own AI finance agenda fits within this framework. The White House wants deployment that supports growth and competitiveness; Treasury is applying that philosophy to the financial system by treating AI as a productivity and resilience instrument, while still preserving the language of safety, soundness, and national security.

The Action Plan explicitly directs agencies to remove regulatory barriers to AI deployment, mandates that federal employees whose work could benefit from frontier language models have access to those tools, and calls for agencies to share AI use cases across government. It also assigns Treasury a role in helping shape AI skill development and broader economic adaptation. Treasury is already pursuing that mandate. Its September 2025 strategy notes plans to make AI literacy and skill development programs eligible for education assistance under the Internal Revenue Code, a provision aligned with the Action Plan’s workforce development objectives.

International Bodies Sound the Systemic Risk Warning

The Financial Stability Board’s November 2024 report, The Financial Stability Implications of Artificial Intelligence, provides the international counterpoint. The FSB found that rapid AI adoption in finance and limited data on AI usage mean authorities should enhance monitoring, assess whether current supervisory and regulatory frameworks are adequate, and enhance regulatory and supervisory capabilities. The report identified four categories of AI related vulnerability with potential to increase systemic risk: third party dependencies and service provider concentration, market correlations from widespread use of similar AI models and training data, cyber vulnerabilities lowering barriers for sophisticated attacks, and model risk compounded by limited explainability and opaque training data.

The FSB followed up in October 2025 with a report examining how financial authorities can monitor AI adoption and assess related vulnerabilities. That report found that many financial authorities are still in an early stage of monitoring AI related vulnerabilities, with data collection challenges including lack of agreed definitions for AI, persistent data gaps, and difficulties in assessing the criticality of AI services. The FSB recommended that national authorities enhance their monitoring approaches, collaborate with domestic stakeholders to formalize metrics, and explore AI tools to both monitor and mitigate vulnerabilities.

Both Treasury and the FSB are converging on the same structural observation: rapid adoption is outpacing the monitoring infrastructure. Treasury’s Innovation Series represents the domestic attempt to close that gap through public private coordination rather than prescriptive rulemaking.

The SEC Signals Principles Over Prescriptions

The signal for regulators themselves came clearly on March 4, 2026, when SEC Chair Paul Atkins delivered remarks at the first FSOC AI Innovation Series Roundtable on Strategy and Governance Principles. Atkins described AI as a force that enables investors to participate in markets with greater confidence, businesses to allocate capital with sharper precision, and regulators to oversee markets with deeper insight. He highlighted the SEC’s AI Task Force, created in August 2025, which deploys AI tools across the agency for enforcement, supervision, and internal operations.

Atkins was explicit that the SEC intends to use existing statutory tools to address AI related risks in securities markets rather than seek new AI specific legislation. He favored principles based rules rooted in materiality over prescriptive mandates, stating that disclosure checklists are no substitute for materiality based transparency under established principles. At the same time, he cautioned that algorithmic detection of possible misconduct cannot supplant the considered judgment of commissioners and staff or serve as the sole basis of an SEC enforcement action. He also signaled interest in an innovation exemption concept, a sandbox like environment that is cabined, time limited, transparent, flexible, and focused on investor protection.

That means Treasury is treating AI not only as something firms must govern but also as something supervisors should learn to use, and the SEC is already pursuing that path internally. The implication is that financial oversight may itself become more AI assisted, which raises its own governance questions around explainability, accountability, and due process.

From Defensive Governance to Performance Governance

The phrase “safety and soundness” is the hinge. Treasury is not abandoning risk management but changing the order of emphasis. Older debates often started with fear of AI error, bias, opacity, or consumer harm. Treasury still recognizes those issues. Its 2024 cybersecurity report flags privacy, bias, third party dependency, and the need to review AI use cases for compliance before deployment. Its December 2024 financial services report recommends that firms prioritize review of AI use cases for compliance with existing laws and regulations before deployment and periodically reevaluate compliance as needed. The GAO’s May 2025 report documents that regulators themselves are approaching AI with measured caution, using outputs alongside other information rather than as autonomous decision inputs.

But the 2026 announcement places those concerns inside a new thesis: disciplined AI adoption is part of resilience, and institutions that cannot deploy tools that improve fraud detection, credit allocation, and operational resilience may become less efficient and less secure. Malik’s statement that the priority is now on embedding AI into core workflows in ways that measurably enhance risk management and resilience shows the department wants proof of performance, not just proof of experimentation. That language is important because it frames AI in finance as a workflow and controls problem, not a branding exercise.

There is a strategic logic behind that shift. Finance is one of the sectors where AI can generate immediate value in pattern recognition, fraud detection, operational efficiency, document review, anomaly spotting, and customer interaction. The GAO report confirmed that financial institutions are already realizing cost savings, faster credit decisions, and improved fraud detection through AI deployment. Treasury’s own materials repeatedly point to these functions. So the department appears to be drawing a boundary between reckless automation and governed operationalization.

Three Factics Chains for Monitoring Outcomes

Factics (Facts + Tactics + KPIs) is a methodology for converting information into action by pairing every factual observation with an executable tactic and a measurable outcome. Three chains apply here.
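The fact-tactic-KPI pairing can be made concrete as a data structure. This is a minimal illustrative sketch, not an official Factics schema; the class and field names are my own, populated with content from Chain 1 below.

```python
from dataclasses import dataclass, field

@dataclass
class FacticsChain:
    """One Factics chain: a factual observation paired with an
    executable tactic and the measurable outcomes (KPIs) that
    would confirm the tactic is working. Illustrative only."""
    fact: str                                  # the factual observation
    tactic: str                                # the executable response
    kpis: list[str] = field(default_factory=list)  # measurable outcomes

# Example: Chain 1 from this article, expressed in the structure above.
chain_1 = FacticsChain(
    fact="Treasury sequenced risk reports, shared tools, and roundtables",
    tactic="Read the Innovation Series as systems building, not a press cycle",
    kpis=["supervisory guidance clarifications", "adoption benchmarks"],
)
```

The point of the structure is that a chain without KPIs is incomplete by construction: anyone reviewing `chain_1` can check whether each listed outcome actually materialized.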

Chain 1: Measuring Series Outputs

The fact is that Treasury, FSOC, and related Treasury backed workstreams have now created a sequence: risk identification (2024 cybersecurity report), common vocabulary (February 2026 AI Lexicon), sector specific framework (February 2026 FS AI RMF), internal governance infrastructure (September 2025 Treasury AI Strategy), and public private roundtables (March 2026 Innovation Series). The tactic is to read the Innovation Series as a systems building exercise rather than a press cycle. The KPI is whether the four roundtables produce concrete outputs such as supervisory guidance clarifications, adoption benchmarks, cross industry controls, sector data standards, or clearer expectations for third party oversight. Without those outputs, the series remains a convening exercise; with them, it becomes architecture.

Chain 2: Tracking Concentration and Systemic Risk

The fact is that both Treasury and international financial stability bodies have warned that AI can amplify vulnerabilities through third party dependencies, opaque models, and concentrated providers. Treasury’s 2024 cybersecurity report recommends more collaboration and monitoring of concentration risk. The FSB’s November 2024 report identified third party dependencies and service provider concentration as a top systemic vulnerability. The FSB’s October 2025 follow up report found that monitoring efforts remain at an early stage, with persistent data gaps around AI supply chain dependencies. A September 2025 RAND Corporation report warned that many institutions using the same AI models could produce synchronized market movements and amplified volatility patterns that extend beyond traditional algorithmic trading risks.

The tactic is to measure not only AI adoption, but also where adoption concentrates. The KPI is concentration exposure by model provider, cloud dependency, external data source dependency, and critical workflow concentration across large institutions. The concentration risk that Treasury and the FSB identify here is the same structural vulnerability that the AI Provider Plurality infrastructure proposal addresses at the federal level through mandatory API accessibility, investment in small platforms, and anti-concentration protections.
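One standard way to quantify the concentration exposure described above is a Herfindahl-Hirschman style index over provider shares. Neither Treasury nor the FSB prescribes this specific metric; this is a sketch under the assumption that workload shares by model provider (or cloud or data vendor) can be estimated, and the share figures below are hypothetical.

```python
def herfindahl_index(shares):
    """Herfindahl-Hirschman style concentration score.

    `shares` are each provider's fraction of AI workloads (e.g. model
    usage across large institutions). Shares are normalized to sum to 1,
    then squared and summed. Returns a value in (0, 1]: higher means
    more concentrated dependency on a few providers."""
    total = sum(shares)
    if total <= 0:
        raise ValueError("shares must sum to a positive value")
    normalized = [s / total for s in shares]
    return sum(s * s for s in normalized)

# Hypothetical provider shares, for illustration only.
concentrated = herfindahl_index([0.70, 0.20, 0.10])       # one dominant provider
diversified = herfindahl_index([0.25, 0.25, 0.25, 0.25])  # evenly spread
```

Tracked over time and per dependency layer (model provider, cloud, external data source), a rising index would flag exactly the kind of synchronized-failure exposure the FSB and RAND analyses warn about.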

Chain 3: Ensuring Competition and Inclusion

Treasury’s March 2024 cybersecurity report warned of a widening capability gap between large and small financial institutions and a fraud data divide. The GAO’s May 2025 report found that trade associations cautioned about costs associated with developing or acquiring AI that may put some tools out of reach for smaller institutions. Treasury’s AIEOG initiative specifically noted that its resources were designed to help institutions, particularly small and mid sized institutions, harness the power of AI to strengthen cyber defenses. The tactic is to ensure that any Treasury led framework produces adoption pathways that do not become an incumbency subsidy. The KPI is adoption quality by institution size, fraud loss reduction by institution class, and vendor risk concentration among small and midsize firms.

Optimization Versus Oversight: Where the Policy Fight Sits

The governance question is what Treasury means by “optimizing regulation.” The announcement suggests the department wants to identify where regulation or enforcement posture may be slowing valuable deployment. The FSOC annual report uses similar language when it says the AI working group will provide a forum for public private dialogue to identify regulatory impediments to responsible adoption. Some coverage has framed this as a regulatory reduction exercise, noting that the roundtables will discuss how to cut federal rules to give businesses more leeway. A coalition of technology, consumer, and civil rights groups has cautioned that AI’s potential benefits for consumers, investors, and the financial system can only materialize if people are protected through consistent application and enforcement of federal civil rights, consumer protection, investor protection, market integrity, and financial supervision statutes.

That tension is real. If “optimization” becomes a code word for lowering controls without strengthening oversight, the system may move faster but become more brittle. If it instead means clarifying supervisory expectations, aligning terminology, improving data standards, and reducing contradictory requirements across agencies, then it can produce the kind of disciplined acceleration Treasury says it wants. The line between those two outcomes is where the real policy fight sits.

That line has a name. The three-tier governance distinction separates Ethical AI (should this be done?), Responsible AI (who answers when this fails?), and AI Governance (who decides, by what authority, at what checkpoint?). Treasury’s lexicons, frameworks, and risk management tools operate in the Responsible AI tier, while the FSOC working group’s checkpoint authority over roundtable outputs and supervisory coordination operates closer to the AI Governance tier. Whether Treasury crosses from one to the other depends on whether the series produces enforceable infrastructure or additional voluntary guidance.
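The three-tier distinction can be stated as a simple classification. The enum and the mapping below are my own illustrative rendering of the taxonomy described above, not an official scheme; the tier assignments restate the article's claims about where Treasury's artifacts currently sit.

```python
from enum import Enum

class GovernanceTier(Enum):
    """The three-tier distinction: each tier is named by the question it answers."""
    ETHICAL_AI = "should this be done?"
    RESPONSIBLE_AI = "who answers when this fails?"
    AI_GOVERNANCE = "who decides, by what authority, at what checkpoint?"

# Hypothetical mapping of the artifacts discussed in this article to tiers.
ARTIFACT_TIERS = {
    "AI Lexicon": GovernanceTier.RESPONSIBLE_AI,
    "FS AI RMF": GovernanceTier.RESPONSIBLE_AI,
    "FSOC working group checkpoint authority": GovernanceTier.AI_GOVERNANCE,
}
```

Framed this way, the open question in the article is whether the Innovation Series moves any entry in that mapping from the Responsible AI tier to the AI Governance tier.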

What This Means and What to Watch

The bottom line is that Treasury’s AI Innovation Series marks a move from diagnosis to deployment. It ties back to Treasury’s March 2024 cybersecurity work, its December 2024 financial services AI report, its February 2026 lexicon and sector risk framework, FSOC’s December 2025 AI working group, Treasury’s September 2025 internal AI strategy, the GAO’s May 2025 oversight report, the FSB’s November 2024 and October 2025 systemic risk analyses, the SEC’s emerging principles based AI oversight posture, and the administration’s July 2025 AI Action Plan.

What it means is that AI in finance is now being framed as a core issue of financial stability and economic security, not just a technology trend. The real test is whether Treasury turns this series into measurable governance outputs that help firms deploy AI more securely, supervisors regulate more intelligently, and the system reduce both innovation drag and systemic fragility at the same time. That test is an enforcement architecture question: whether the outputs remain voluntary frameworks (Responsible AI) or produce the kind of deterministic, auditable governance infrastructure that converts policy intent into operational accountability.


Frequently Asked Questions

What is Treasury’s AI Innovation Series and why does it matter for financial institutions?

The AI Innovation Series is a public private initiative launched by FSOC and Treasury’s AI Transformation Office in March 2026 to convene regulators, financial institutions, and technology firms across four roundtables. It matters because Treasury now frames AI adoption as a financial stability requirement, not just a technology option, meaning institutions that lag on governed AI deployment face resilience risk in Treasury’s assessment.

What resources has Treasury published to help financial firms manage AI risk?

Treasury released two sector specific tools in February 2026: an AI Lexicon that standardizes definitions across regulatory, technical, and business functions, and a Financial Services AI Risk Management Framework that adapts the NIST AI RMF to financial services. Both were developed through the Artificial Intelligence Executive Oversight Group in partnership with industry and federal and state regulatory partners.

How does the FSOC 2025 Annual Report connect to Treasury’s AI roundtables?

The FSOC 2025 Annual Report established an AI Working Group with a mandate to identify high value AI use cases for member agencies and provide a forum for public private dialogue on regulatory impediments to responsible adoption. Treasury’s Innovation Series is the public implementation vehicle for that mandate, translating FSOC’s institutional directive into operational convenings.

What systemic risks has the Financial Stability Board identified from AI adoption in finance?

The FSB’s November 2024 report identified four categories of AI related vulnerability with potential to increase systemic risk: third party dependencies and service provider concentration, market correlations from widespread use of similar models and training data, cyber vulnerabilities from AI enabled attacks, and model risk compounded by limited explainability and opaque training data. Its October 2025 follow up found that most authorities remain at an early monitoring stage.


Collected Sources

  1. U.S. Department of the Treasury, “Treasury Launches the Artificial Intelligence (AI) Innovation Series,” Press Release sb0421, March 20, 2026. https://home.treasury.gov/news/press-releases/sb0421
  2. U.S. Department of the Treasury, “U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector,” Press Release jy2212, March 27, 2024. https://home.treasury.gov/news/press-releases/jy2212
  3. U.S. Department of the Treasury, “Treasury Releases Report on the Uses, Opportunities, and Risks of Artificial Intelligence in Financial Services,” Press Release jy2760, December 19, 2024. https://home.treasury.gov/news/press-releases/jy2760
  4. U.S. Department of the Treasury, “Treasury Releases Two New Resources to Guide AI Use in the Financial Sector,” Press Release sb0401, February 19, 2026. https://home.treasury.gov/news/press-releases/sb0401
  5. U.S. Department of the Treasury, “Treasury Announces Public-Private Initiative to Strengthen Cybersecurity and Risk Management for AI,” Press Release sb0395, February 18, 2026. https://home.treasury.gov/news/press-releases/sb0395
  6. U.S. Department of the Treasury, “U.S. Department of the Treasury’s AI Strategy for OMB Memorandum M-25-21,” September 2025. https://home.treasury.gov/system/files/136/Treasury-AI-Strategy.pdf
  7. Financial Stability Oversight Council, “FSOC 2025 Annual Report,” Press Release sb0334, December 11, 2025. https://home.treasury.gov/news/press-releases/sb0334
  8. Financial Stability Oversight Council, “FSOC 2025 Annual Report” (full report), December 2025. https://home.treasury.gov/system/files/261/FSOC2025AnnualReport.pdf
  9. Financial Stability Board, “The Financial Stability Implications of Artificial Intelligence,” November 14, 2024. https://www.fsb.org/2024/11/the-financial-stability-implications-of-artificial-intelligence/
  10. Financial Stability Board, “Monitoring Adoption of Artificial Intelligence and Related Vulnerabilities in the Financial Sector,” October 10, 2025. https://www.fsb.org/2025/10/monitoring-adoption-of-artificial-intelligence-and-related-vulnerabilities-in-the-financial-sector/
  11. U.S. Securities and Exchange Commission, “Remarks at Financial Stability Oversight Council Artificial Intelligence Innovation Series Roundtable on Strategy and Governance Principles,” Chair Paul S. Atkins, March 4, 2026. https://www.sec.gov/newsroom/speeches-statements/atkins-remarks-at-financial-stability-oversight-council-artificial-intelligence-innovation-series-roundtable-030426
  12. U.S. Government Accountability Office, “Artificial Intelligence: Use and Oversight in Financial Services,” GAO-25-107197, May 19, 2025. https://www.gao.gov/products/gao-25-107197
  13. The White House, “Winning the Race: America’s AI Action Plan,” July 23, 2025. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
  14. U.S. Department of the Treasury, “Treasury and Artificial Intelligence” (resource hub). https://home.treasury.gov/policy-issues/financial-markets-financial-institutions-and-fiscal-service/treasury-and-artificial-intelligence
  15. U.S. Department of the Treasury, “READOUT: Financial Stability Oversight Council Meeting on March 25, 2026,” Press Release sb0423, March 25, 2026. https://home.treasury.gov/news/press-releases/sb0423
