Chapter 1 of 3  ·  From AI Fantasy to Real Moats: A State of the Union for CMOs, by a CMO
A Swim Club Series: From Experiments to System

From AI Fantasy
to Real Moats

A State of the Union for CMOs, by a CMO

Updated 14 April 2026  ·  v1.4
Intent

This guide is for CMOs and marketing leaders who are being asked to “have an AI strategy” on top of everything else.

My goal is to give you a candid, operator-grade view of what AI can and can’t do for marketing right now and what’s actually worth your time. Read this and you should leave more confident in your AI readiness: you already know more than you think.

You should come away with:

  • A clear sense of what’s real vs. what’s theatre in AI for marketing today.
  • A simple way to think about assistants, workflows, and agents without getting lost in buzzwords.
  • A grounded view of where AI can already help a team like yours, and where it’s still fantasy or uneconomical.
  • A practical picture of what a sensible first phase looks like—clarity, knowledge, and workflows, not ‘AI transformation.’
About me

Where I’m coming from

Most marketing leaders I speak to aren’t waking up thinking, “I know exactly how to approach AI.” Many are asking “am I behind?” or saying “I don’t know what I don’t know!” That was me 18 months ago. So I set out to cut through it: separate AI reality from the hype, rather than rushing into ‘agentifying’ everything or pretending you can replace 50% of your team with bots.

I stepped away from my CMO role to do this properly, and I now get to work with peers at .monks, founders of AI startups, and marketing practitioners. Most importantly, I’m building and applying this every day: to my own business, to post-PMF and early-to-mid-stage startups, and to mature marketing teams.

The thing that came out of all this work — that I keep returning to — is that the problem isn’t AI knowledge. It’s that most marketing orgs don’t have a stable foundation underneath. I wanted to name that. Not because I needed another framework, but because you can’t build toward something you can’t point to. Brand OS is what I landed on: the operating layer that tells your whole org what you stand for, how you sound, and how to make decisions. It’s what makes everything else compound.

What drives me is getting us marketers back to what good marketing actually feels like: great storytelling, customer experiences that just work, and the possibility of having more time to think deeply, create, and spend more time with humans.

As you read this
A. I will name where I think AI is useful right now for marketing.
B. I will name where I think it is overhyped, too early, or simply uneconomical.
C. I will not tell you that “put agents everywhere” is around the corner for a typical marketing org.
D. I will not tell you that you should reduce your team by large percentages.

—AMG

Anne-Marie Goulet
02 / The Macro Shift Around Us

The AI environment is shifting fast. Here’s what I’m seeing.

Let’s first zoom out to get a point of reference for what’s happening in the world around us. I originally wrote this guide in December 2025 and since then I’ve made meaningful updates because the AI environment is shifting fast, and it’s hard to keep up. Stay tuned to this URL because I plan to update this on a 1–2 month cadence. I found that Gartner’s 2025 AI Hype Cycle is a useful shorthand for what’s actually happening:

  • 1
    AI Agents are at peak hype.

    They sit at the top of the “inflated expectations” curve—everyone’s promising autonomous everything. Gartner’s view (which I share): for many marketing and GTM jobs, it may never be economically rational to fully automate with agents. The cost, complexity, and trust bar are just too high for most teams.

  • 2
    Gen-AI hangover is here.

    We’re past the first wave of excitement. Most teams have run a bunch of generative-AI pilots; a few work, many don’t. A Writer survey of 2,400 global leaders (January 2026) puts a number on it: 48% call AI adoption a “massive disappointment”—up from 34% the year before. In other words: slick Gen AI demos are easy. Durable Gen AI value is hard, and it’s being decided by cost, plumbing, and data quality.

    What’s separating the keepers from the experiments now isn’t how impressive the demo looks, it’s whether: the unit economics make sense, the integration work is realistic, and the underlying data is clean and trustworthy enough to use in real decisions.

  • 3
    The transformation layer is the missing piece.

    There’s a lot of well-meaning action being taken right now—teams building workflows, marketers creating skills in Claude, leaders rolling out new tools. That work matters. But what I keep seeing as the missing layer above all of it is the human one: your team’s mindset and readiness, and their actual ability to work together to make transformation happen.

    Tools don’t transform organizations. People do. And we’re going to dig into that together.

  • 4
    The “boring” stuff is what reaches the plateau.

    The capabilities Gartner expects to actually hit the plateau of productivity by 2029 are: AI Engineering, Responsible AI, and AI-ready data and knowledge.

The long-term value for you is in boring things like data quality, governance, and workflows—not in throwing autonomous agents at everything.
Quick Check

Where are you on the curve?

Four quick questions. No wrong answers, just a candid read of where your team actually sits.

1. When leadership asks "what are we doing with AI?", your real answer is...

2. How did your team’s last AI tool get added?

3. If I asked three leaders at your company "who is your ICP?", I’d get...

4. Where does your company’s brand story, ICP, and messaging actually live?

03 / The Current State of AI in Marketing

Fact vs. Fantasy

No real-world AI is 100% hands-free magic. Here’s a candid version of where AI technology is today:

Tier 1

Assistants

When I wrote this in late 2025, I treated the major AI models as roughly interchangeable thinking partners. That’s no longer accurate and undersells the decision you’re actually making.

The models have differentiated. But more importantly, they’ve differentiated at the layer that matters for how teams actually operate: connectivity. A thinking partner is only as good as what it knows. And what it knows depends entirely on what you’ve given it access to — your documents, your workflows, your knowledge base, your calendar, voice of customer (like Gong and others), review sites, and more.

The question worth asking isn’t “which AI is smartest?” It’s “which AI actually connects to how my team works?”

Once a tool is embedded in your operations, switching starts to feel structural, not just inconvenient. That’s the infrastructure shift and it’s what separates a tool you use from a tool you run on.

Custom assistants are also now buildable natively inside most platforms — no third-party tools required. If you have a narrow, well-defined job (a brief template, a research synthesizer, a QBR assistant), the barrier to building something useful has never been lower.

Drafting and reasoning are comparable across the three. What differs is depth, breadth, and fit.
Claude (Anthropic)
Best for depth of context and brand voice. Load your business context and Brand OS once; it reasons with what you give it. Strongest when output quality and consistency matter. This is what I use.
ChatGPT (OpenAI)
Best for breadth of built-in features. Image generation, web browsing, voice, and a broad plugin ecosystem — one tool that covers a lot of ground without extra setup.
Gemini (Google)
Embedded directly inside Gmail, Docs, and Sheets — no window switching. If your team’s work happens inside Google, Gemini meets them there. (Claude can work with Google content too; what’s specific to Gemini is the native UI, not the access.)
All three handle everyday drafting, summarizing, and Q&A with comparable competence. The gap shows up when you need depth of context, feature breadth, or tight ecosystem integration.
Tier 2 · Most Near-Term Value

Workflows are where most near-term value lives.

A workflow follows a fixed path you designed. It runs the same steps in the same order every time, with some AI under the hood.

Example: “When X happens, do A → B → C.” The system runs the recipe, but it’s usually dumb about edge cases and tightly scripted. “When a QBR is 14 days away, pull this data, draft this deck, notify this person.”
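The QBR recipe above can be sketched in a few lines of Python. Everything here is illustrative: the `qbr_workflow` function, the step names, and the 14-day trigger are hypothetical placeholders, not any vendor’s actual API.

```python
# Illustrative sketch of a fixed-path workflow; all names are hypothetical.
from datetime import date, timedelta

def qbr_workflow(qbr_date: date, today: date) -> list[str]:
    """When a QBR is 14 days away, run the same steps in the same order."""
    actions: list[str] = []
    if qbr_date - today == timedelta(days=14):  # fixed trigger, designed up front
        actions.append("pull account data")      # step A
        actions.append("draft QBR deck")         # step B (some AI under the hood)
        actions.append("notify account owner")   # step C
    return actions  # same recipe every time; edge cases are not handled

print(qbr_workflow(date(2026, 5, 15), date(2026, 5, 1)))
```

The point of the sketch: the path is fixed at design time. If the data is missing or the QBR moves, the recipe doesn’t adapt; a human (or an agent) would have to.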

The foundation is the same. The integration is the differentiator.
Claude
Best for knowledge-heavy workflows. Load your Brand OS, docs, and customer context once — every output that follows has that context baked in without re-briefing from scratch.
ChatGPT
Best for cross-app automation. Pair it with Zapier or its native connectors to stitch AI into tools across your stack — CRM, Slack, project management, and more.
Gemini
Best for Google-native teams. Works directly inside Docs, Sheets, Gmail, and Meet with no migration. If your team already lives in Google, this is lowest friction.
All three can draft, summarize, and run repeatable tasks. What varies is where each integrates natively — pick the one closest to where your team’s work already happens.
Tier 3

Agents: real in narrow contexts, still early for most of GTM.

An agent can adapt. It reads state, makes decisions, calls tools, and loops until a goal is complete. Think: “Research this account, draft a personalized email, check for a reply, and follow up in 3 days if no response.”

This is real. But for most marketing and GTM teams, building reliable, auditable, on-brand agents requires an investment in plumbing, Brand OS, and oversight that most orgs aren’t ready for yet.
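To make the contrast with a workflow concrete, here is a minimal Python sketch of an agent loop. All names (`agent_loop`, `goal_done`, `decide_next_step`, `act`) are hypothetical, and the email-and-follow-up behaviour is faked with toy lambdas; real agent frameworks add tool calls, memory, and auditing on top of this shape.

```python
# Illustrative agent loop: the agent reads state, picks its own next step,
# and repeats until the goal is met or a safety cap is hit. Names are hypothetical.
def agent_loop(goal_done, decide_next_step, act, max_steps=10):
    state = {"steps": []}
    for _ in range(max_steps):        # guardrail: never loop forever
        if goal_done(state):
            return state
        step = decide_next_step(state)  # the agent chooses; nothing is scripted
        state = act(step, state)
    return state                       # gave up at the cap

# Toy run: goal is "a reply was received"; the agent follows up until it is.
state = agent_loop(
    goal_done=lambda s: "reply" in s,
    decide_next_step=lambda s: "send email" if not s["steps"] else "follow up",
    act=lambda step, s: {**s, "steps": s["steps"] + [step],
                         **({"reply": True} if len(s["steps"]) >= 2 else {})},
)
print(state["steps"])
```

The difference from a workflow is the `decide_next_step` call: the next action is chosen at run time from state rather than scripted in advance, which is exactly what raises the cost, trust, and auditing bar.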

Agent or workflow? The real answer.
Claude (Anthropic)
Workflow, not agent for most CMO use cases. Agentic features (Computer Use, autonomous loops) exist but aren’t production-ready for most teams. Best positioned for bounded, multi-step tasks with a human in the loop.
ChatGPT (OpenAI)
Workflow. Tasks and plugins feel automated, but the loop is scripted, not self-directed. True agentic ChatGPT (Operator) is a developer product — not what most marketing teams are running.
Gemini (Google)
Workflow. Gemini in Workspace is a workflow tool. Google’s actual agent products require enterprise setup and developer work — not what a CMO team accesses today.
Across all three: if you’re not writing code and setting up infrastructure, you’re building workflows — not agents. That’s not a limitation. It’s where most of the near-term value actually lives.
04 / The Knowledge Layer

The question isn’t which tool. It’s what your tools actually know.

Every conversation about AI adoption eventually arrives at the same question: which surface does your team live in? Slack? Email? Notion? Google Workspace?

It’s an incomplete question.

The surface is one side of the coin: it’s an access point. The AI integrated into your surface is only as useful as what it can actually reason over.

The whole question is: if you gave your AI everything your team knows, what would it have? Is what it has worth having? And then: how do we make it accessible inside the way a human actually works?

For most marketing teams, the ICP lives in a slide deck from 18 months ago. Positioning has been debated in a Slack thread nobody can find. User research is in a folder three people know exists. All of it exists somewhere — in pieces, owned by individuals rather than the organization.

That’s not an AI problem. That’s a knowledge architecture problem.

When Shopify made AI mandatory across the company, they weren’t chasing a new tool. They were solving a knowledge problem: how do you get your AI to reason over everything your organization actually knows, instead of starting from scratch every time? That question applies whether your team is writing code or writing campaigns.

AI value isn’t linear. It scales directly with the quality and accessibility of the knowledge underneath it.

The compounding insight: A team with a strong knowledge layer and basic tooling will consistently outperform a team with premium AI tools and scattered, disorganized knowledge. Every time. The teams getting compounding returns from AI aren’t the ones who picked the right surface. They’re the ones who built a knowledge layer their AI can reason over, and who keep it current. So the equation is surface + knowledge layer = ♥

This is why I developed something I’m referring to as “Brand OS”: a way to think about and build your knowledge architecture. When your ICP, your positioning, your customer insights, and your proof points live in one governed place that your AI can access, everything downstream compounds. That’s the moat. Not “we use AI” — but “our AI knows everything we know.”

The practical question to start with

If you gave your AI a brief to write for your next campaign, what would it actually have to work with? What’s documented vs. living in someone’s head? The answer to that question tells you more about your AI readiness than any tool assessment ever will.

05 / We’ve Seen This Movie Before

If we don’t fix the fundamentals now, AI just becomes the next costume for the same problems.

We’ve been here before. The pattern is consistent across every major platform shift:

  • 1
    In the paid click era,

    search and PPC let us buy attention and measure everything. We optimised clicks and targeting and let real customer understanding and brand building fade.

  • 2
    In the automation era,

    nurture engines and marketing automation platforms took over. We optimised sends, MQLs, and sequences, and often de-optimised usefulness, clarity, and trust.

  • 3
    Now in the AI era,

    models, assistants, and agents promise the same thing: more scale, more efficiency. The temptation is the same too: treat AI as a volume engine and skip the fundamentals and system design underneath.

new toy → short-term gains and hype → long-term erosion when we skip the basics.
06 / This is Why The Fundamentals Still Matter

From my vantage point, most failed AI marketing pilots are actually fundamentals problems.

When AI pilots fail, the root cause is almost never the AI. It’s a fundamentals problem. Before building anything new on top, ask yourself:

  • Have you aligned on why you even need AI today? Have you distilled the core pain/friction you need to solve for? What's not working? Start with the problem, not the possibility.
  • Do you know how you’ll measure success on the other side of this shift? For example, how do you measure better quality of positioning? Or better-quality campaign outputs?
  • Do you have top-down leadership buy-in? Or has the AI strategy been a “let ’er rip” free-for-all?
  • Can everyone in your company state who the customer is? Is it fragmented? Shallow?
  • Is your message flat/middling/confusing? Is it strong, but your people can’t find it or are unsure how to leverage it?

So my order of operations is:

A. One, singular company ambition.
B. Know the customer.
C. Reach them where they actually are.
D. Say something that resonates.
E. Deliver what you promised.
F. Scale clarity & judgement with AI.
Fundamentals first. Systems second. AI third. Most of the industry has that flipped.
07 / The Solution: Brand OS as Your Moat

If AI is everywhere, what’s actually yours?

Everyone can access similar models and tools. “We also use AI” is not a differentiator.

Your moat is your Brand Operating System (Brand OS):

Brand OS · Your moat is built in two layers.

Knowledge Base

Everything your team knows, believes, and can prove — in one governed place. Your Story lives here too.

Customer Truth
  • ICP, segments, and proof
  • Use cases and objections
  • Product truth and proof points
Brand Story
  • One narrative, one voice
  • Messaging by segment
  • What you say no to

Brand OS System

The workflows, guardrails, and tools that operationalize the knowledge. How the story gets into the work.

Brand OS sits at the intersection of three things: your customer truth (who you’re for, what they need, what makes you the right choice), your brand story (the one narrative that holds across every channel, role, and format), and your operational system (the workflows, guardrails, and AI-readable structure that makes the story usable at scale).

When all three exist and connect, your team stops starting from scratch. Your AI stops hallucinating your value prop. Your new hires get up to speed in days, not months. Your sales team tells the same story as your content team.

Brand OS isn’t a brand strategy. It’s infrastructure: the foundation everything else compounds on top of.
08 / A Marketer With vs. Without Brand OS

Imagine the same marketer, same ask:

“We need a campaign for mid-market ops leaders who are scaling past 50 people and hitting process ceilings — focus on handoff failures and what it costs them.”

Without Brand OS
🔍 Dig through old decks for anything “mid-market.”
💬 DM three people for “the latest messaging”—get three different answers.
🤷 Ask sales for recent objections—get anecdotes, not data.
🧩 Paste a soup of slides and notes into AI and hope it synthesises.
AI produces something — but it’s stitching together inconsistent inputs. It can’t know which story is true, because you haven’t decided.
With Brand OS
Open a brief template that auto-pulls the canonical “mid-market ops leader” definition from the Brand OS.
🎯 Surfaces top pains and proof points for that segment from real data.
🔗 Links to the current “handoff failures” narrative and proof points.
🤖 AI is constrained to: this ICP, this promise, this proof, this tone.
The first draft is 70–80% right because it’s built on one source of truth — not twelve half-remembered ones.
Same marketer. Same AI. The difference in quality comes from whether the Brand OS exists.
09 / The AI Landscape That Actually Matters to You

Once you see Brand OS as the base, the AI landscape becomes less overwhelming.

There are five distinct ways AI is entering your world as a marketing leader. They have different implications, different urgency, and different dependencies. The mistake most teams make is treating them all the same — buying tools from bucket 3 before they’ve understood what’s happening in bucket 1.

1
AI in Your Buyers’ Journey · Most Underrated

Google AI Overviews, buyers using ChatGPT/Claude to compare you vs. alternatives, AI answers shaping how your category is described before a human ever visits your site.

What this means: Your Brand OS is now training data whether you planned it that way or not. The companies with a clear, documented, consistent story are getting summarised accurately by AI. Everyone else is getting described by their competitors.

2
AI Embedded in Your Tools · Already Here

Salesforce Einstein, HubSpot AI, Google Workspace Gemini, Notion AI — the tools you already pay for are getting AI features layered in.

What this means: Ignore the upsell pressure. Evaluate based on actual workflow fit, not feature lists. And remember: if your data in those tools is messy, the AI output will be too.

3
AI Assistants & Custom Tools You Build · High Leverage

Claude, ChatGPT, Gemini as thinking partners. Custom GPTs, Claude Projects, Cowork sessions tailored to your Brand OS.

What this means: This is where most teams get their first real ROI. Start with a generic assistant pointed at your Brand OS. Then build narrow, specific tools for your team’s highest-friction workflows.

4
AI-Native Vendors Replacing Point Solutions · Evaluate Carefully

New tools built AI-first: Jasper, Writer, Synthesia, and dozens of others positioning to replace your existing martech stack — or at least your agency spend.

What this means: Some of these will win. Many won’t. Don’t add to your stack without a clear job-to-be-done and a named owner. The graveyard of AI subscriptions is full of great demos.

5
Full AI Agents & Automation · Still Early for Most

Autonomous agents that run multi-step processes without human oversight. Interesting in narrow, well-defined contexts. Risky, expensive, and often unnecessary for most GTM work right now.

What this means: Don’t let vendor FOMO drive you here before you’ve done the foundational work. Bucket 5 only compounds on top of Buckets 1–4.

10 / How Do You GSD: Building the Capability

Getting there isn’t about which tools you adopt first. It’s about what your team needs to be ready to do at each stage.

Step 1 · Document the foundation
Get the knowledge out of people’s heads. Who you’re for, what you promise, why you win. This is Brand OS — not a tool purchase, a decision. The first concrete thing your team can actually act from.
Step 2 · Distribute the judgement
Brand OS only works if your team can use it. The act of building it together matters as much as the document itself, because the process is what creates shared standards. Now your team knows what good looks like, and they can make calls without escalating.
Step 3 · Find the high-friction, high-value work
Look at what your team does repeatedly that follows a pattern. Campaign briefs. QBR prep. Post-demo follow-up. Map those workflows. These are the places worth systematising, but only because you’ve documented the standard first.
Step 4 · Let tools serve the capability
Now AI has something real to work from. Better drafts, faster execution, more consistent output, because your Brand OS is the reference point and your team has the judgement to direct and critique what AI produces. Tools amplify the capability. They don’t substitute for it.
11 / The Capability Staircase

The staircase isn’t about tools. It’s about whether your team knows how to make the right call — at every level, on the easy decisions and the hard ones.

Most companies measure AI maturity by tooling. How many integrations. How many agents. How many automations running. But that’s the wrong unit of measurement.

The right question is: what can your team decide, produce, and sustain, confidently, at every level?

That’s a capability question. And it has a progression:

Dependent

Output quality depends on who’s available. A great brief happens when the right person writes it. Messaging shifts meeting to meeting. The brand lives in people’s heads, not in shared infrastructure.

Consistent

The team draws from a shared foundation. Brand OS exists. Decisions don’t require escalation because the thinking is already documented. AI has something real to work from.

Leveraged

Workflows run reliably. The team produces more without proportionally more effort. Leadership is a quality check, not a dependency.

Compounding

Every campaign, every piece of content, every customer interaction makes the system stronger. The team isn’t just executing. They’re developing the judgement to make everything better as they go.

The insight most AI-forward companies miss: a team at the Consistent stage, using modest tooling, will outperform a Dependent team with a full agent stack. Every time.

Brand OS is what moves you from Dependent to Consistent. That’s the unlock. Everything above it follows.

The progression above describes what your team produces. Here’s what it doesn’t say.

The real work isn’t in the tools. It isn’t in the workflows. It isn’t even in the Brand OS document itself.

It’s in the humans who build it.

When your team goes through the process of defining who you’re for, what you promise, and why you win — they develop something that can’t be templated: the instinct to know what good looks like here, in this company, for this brand. That instinct is what makes AI useful. Not the other way around.

The most valuable capability you can build isn’t prompt-writing. It’s the ability to look at an AI output and say: this is almost right, but here’s exactly what’s off and why. Sharp critique. Specific feedback. Standards that exist somewhere other than the founder’s head.

AI outputs improve in direct proportion to the quality of human judgement applied to them. A team with clear standards and a shared foundation will consistently outperform a team with better tools and blurry ones. Every time.

I’ve seen this in my own work. Claude went from producing writing that was fine to producing writing I’d put my name on, not because the model got smarter, but because I got more precise about what I wanted and why. The feedback loop is the work. The compounding happens there.

Brand OS gives your whole team that same reference point. A shared vocabulary for what good sounds like. A foundation for critique that doesn’t require the founder in the room.

That’s what makes the capability compound. Not the tools. The judgement.

12 / What I’d Say No To

If I were sitting in your seat, here’s what I’d say no to in the next 6–12 months.

01

“Agents Everywhere” or “I need to cut half my marketing team”

I would not sign up for any plan that implies autonomous agents running large chunks of GTM, or replacing whole roles in the next couple of quarters. The economics, data quality, and trust just aren’t there in most marketing or commercial orgs yet.

02

Tool-Led “Strategy”

I would say no to vendor-led roadmaps masquerading as strategy, and deals driven by FOMO (“our competitors just bought X” or “we’re late to the AI table”). Tools don’t create capability — they amplify it. If your team doesn’t have the foundation, the judgement, or the shared standards to direct AI outputs, adding more tooling just produces more polished mediocrity, faster. Build the capability first. Then decide what tools serve it.

03

Dozens of Unconnected Pilots

I would cap the number of active AI experiments and kill anything that doesn’t map to a clear workflow, doesn’t have a named owner, or can’t be tied to a concrete “job to be done” in the system. “We tried 17 things and learned nothing we can reuse” is the fastest way to burn trust.

04

AI Work That Ignores Data and Definitions

I would not green-light AI projects that rely on dirty data we already don’t trust, depend on definitions that leaders don’t agree on, or assume “the model will figure it out” instead of us doing our homework. If the humans don’t agree on what “ICP” means, the AI definitely won’t.

05

Pilots Without a Clear Human Outcome

I’d say no to experiments where we can’t articulate how this makes someone’s day meaningfully better (time saved, less rework, better decisions). Your team’s attention is your scarcest resource. Spend it where there’s a real shot at better work, not just cooler screenshots.

13 / What “Better” Should Feel Like

If this works, your day feels different.

When this works, your day shifts. Less time chasing context, more time on strategic, creative decisions. Briefs stop needing three revision rounds. Sales, product, marketing, and CS tell one coherent story. AI feels like a quiet force multiplier, not another fire drill.

Brand OS gives you the foundation. What you’re really buying is a different operating posture:

  • Marketing as a strategic engine, not an emergency service.
  • Growth that feels deliberate, not accidental.
  • Leadership operating as one system rather than five disconnected execs.
  • A rubric for evaluating AI tools against your system — not the other way around.
The tools will keep changing. The acronyms will keep multiplying. What doesn’t change is the need to really know your customer, the need for a clear, shared story, and the need for a system that lets smart people do their best work.
Let’s Work Together

Ready to build your Brand OS?

If this resonated and you want to talk about what Phase 1 could look like for your team, I’m taking on a small number of engagements this quarter.

Coming Soon:
Chapter 2 of 3 · Build Your Brand OS