From AI Fantasy
to Real Moats
A State of the Union for CMOs, by a CMO
This guide is for CMOs and marketing leaders who are being asked to “have an AI strategy” on top of everything else.
My goal is to give you a candid, operator-grade view of what AI can and can’t do for marketing right now, and what’s actually worth your time. You should leave more confident in your AI readiness, and you may find you already know more than you think.
You should come away with:
- A clear sense of what’s real vs. what’s theatre in AI for marketing today.
- A simple way to think about assistants, workflows, and agents without getting lost in buzzwords.
- A grounded view of where AI can already help a team like yours, and where it’s still fantasy or uneconomical.
- A practical picture of what a sensible first phase looks like—clarity, knowledge, and workflows, not ‘AI transformation.’
Where I’m coming from
Most marketing leaders I speak to aren’t waking up thinking, “I know exactly how to approach AI.” Many are asking “am I behind?” or saying “I don’t know what I don’t know!” That was me 18 months ago. So I set out to cut through it: separate AI reality from the hype, rather than rushing into ‘agentifying’ everything or pretending you can replace 50% of your team with bots.
I stepped away from my CMO role to do this properly, and I get to work with peers at .monks, founders of AI startups, and marketing practitioners. Most importantly, I’m building and applying this every day: in my own business, with post-PMF and early mid-stage startups, and with mature marketing teams.
The thing that came out of all this work — that I keep returning to — is that the problem isn’t AI knowledge. It’s that most marketing orgs don’t have a stable foundation underneath. I wanted to name that. Not because I needed another framework, but because you can’t build toward something you can’t point to. Brand OS is what I landed on: the operating layer that tells your whole org what you stand for, how you sound, and how to make decisions. It’s what makes everything else compound.
What drives me is getting us marketers back to what good marketing actually feels like: great storytelling, customer experiences that just work, and the possibility of having more time to think deeply, create, and spend more time with humans.
—AMG

The AI environment is shifting fast. Here’s what I’m seeing.
Let’s first zoom out for a point of reference on what’s happening in the world around us. I originally wrote this guide in December 2025 and since then I’ve made meaningful updates because the AI environment is shifting fast, and it’s hard to keep up. Stay tuned to this URL; I plan to update this on a 1–2 month cadence. I’ve found that Gartner’s 2025 AI Hype Cycle is a useful shorthand for what’s actually happening:
1. AI Agents are at peak hype.
They sit at the top of the “inflated expectations” curve—everyone’s promising autonomous everything. Gartner’s view (which I share): for many marketing and GTM jobs, it may never be economically rational to fully automate with agents. The cost, complexity, and trust bar are just too high for most teams.
2. The Gen-AI hangover is here.
We’re past the first wave of excitement. Most teams have run a bunch of generative-AI pilots; a few work, many don’t. A Writer survey of 2,400 global leaders (January 2026) puts a number on it: 48% call AI adoption a “massive disappointment”—up from 34% the year before. In other words: slick Gen AI demos are easy. Durable Gen AI value is hard, and it’s being decided by cost, plumbing, and data quality.
What’s separating the keepers from the experiments now isn’t how impressive the demo looks. It’s whether the unit economics make sense, the integration work is realistic, and the underlying data is clean and trustworthy enough to use in real decisions.
3. The transformation layer is the missing piece.
There’s a lot of well-meaning action being taken right now—teams building workflows, marketers creating skills in Claude, leaders rolling out new tools. That work matters. But what I keep seeing as the missing layer above all of it is the human one: your team’s mindset and readiness, and their actual ability to work together to make transformation happen.
Tools don’t transform organizations. People do. And we’re going to dig into that together.
4. The “boring” stuff is what reaches the plateau.
The capabilities Gartner expects to actually hit the plateau of productivity by 2029 are: AI Engineering, Responsible AI, and AI-ready data and knowledge.
The long-term value for you is in boring things like data quality, governance, and workflows—not in throwing autonomous agents at everything.
Fact vs. Fantasy
No real-world AI is 100% hands-free magic. Here’s a candid version of where AI technology is today:
Assistants
When I wrote this in late 2025, I treated the major AI models as roughly interchangeable thinking partners. That’s no longer accurate and undersells the decision you’re actually making.
The models have differentiated. But more importantly, they’ve differentiated at the layer that matters for how teams actually operate: connectivity. A thinking partner is only as good as what it knows. And what it knows depends entirely on what you’ve given it access to — your documents, your workflows, your knowledge base, your calendar, voice of customer (like Gong and others), review sites, and more.
The question worth asking isn’t “which AI is smartest?” It’s “which AI actually connects to how my team works?”
Once a tool is embedded in your operations, switching starts to feel structural, not just inconvenient. That’s the infrastructure shift and it’s what separates a tool you use from a tool you run on.
Custom assistants are also now buildable natively inside most platforms — no third-party tools required. If you have a narrow, well-defined job (a brief template, a research synthesizer, a QBR assistant), the barrier to building something useful has never been lower.
Workflows are where most near-term value lives.
A workflow follows a fixed path you designed. It runs the same steps in the same order every time, with some AI under the hood.
Example: “When X happens, do A → B → C.” Concretely: “When a QBR is 14 days away, pull this data, draft this deck, notify this person.” The system runs the recipe, but it’s tightly scripted and usually dumb about edge cases.
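To make “fixed path” concrete, here’s a minimal Python sketch of that QBR recipe. Every function name here is a hypothetical stand-in for a real integration (CRM pull, deck drafting, notification), not an actual product API:

```python
from datetime import date, timedelta

# Hypothetical stand-ins for real integrations (CRM, slides, chat).
def pull_account_data(account):
    return {"account": account, "usage": "(metrics pulled from CRM)"}

def draft_deck(data):
    return f"QBR deck for {data['account']}"

def notify_owner(account, deck):
    print(f"Heads up: {deck} is ready for review")

def qbr_workflow(account, qbr_date, today=None):
    """Fixed recipe: when a QBR is 14 days away, run A -> B -> C in order.
    Same steps, same order, every time; edge cases need explicit handling."""
    today = today or date.today()
    if qbr_date - today != timedelta(days=14):
        return None  # trigger condition not met; nothing runs
    data = pull_account_data(account)   # A: pull this data
    deck = draft_deck(data)             # B: draft this deck
    notify_owner(account, deck)         # C: notify this person
    return deck

qbr_workflow("Acme Corp", date.today() + timedelta(days=14))
```

Note what’s missing: no judgement about whether this QBR matters, no handling of a rescheduled date. That rigidity is the point; workflows are predictable precisely because they’re scripted.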
Agents: real in narrow contexts, still early for most of GTM.
An agent can adapt. It reads state, makes decisions, calls tools, and loops until a goal is complete. Think: “Research this account, draft a personalized email, check for a reply, and follow up in 3 days if no response.”
This is real. But for most marketing and GTM teams, building reliable, auditable, on-brand agents requires an investment in plumbing, Brand OS, and oversight that most orgs aren’t ready for yet.
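For contrast, here’s a minimal Python sketch of that agent loop: read state, decide, act, repeat until the goal is met. Everything in it (the tool functions, the decision rule, the state keys) is an illustrative stand-in; in a real agent an LLM would choose the next action and real integrations would execute it:

```python
# Illustrative stand-in "tools" the agent can call.
def research_account(state):
    state["research"] = f"notes on {state['account']}"

def draft_email(state):
    state["email_sent"] = True
    state["days_waited"] = 0

def wait_a_day(state):
    state["days_waited"] += 1

def follow_up(state):
    state["followed_up"] = True

def decide_next_action(state):
    # In a real agent an LLM makes this call; here it's a readable stand-in.
    if state.get("reply_received") or state.get("followed_up"):
        return None                      # goal reached: stop
    if "research" not in state:
        return research_account          # research this account
    if not state.get("email_sent"):
        return draft_email               # draft a personalized email
    if state.get("days_waited", 0) < 3:
        return wait_a_day                # keep checking for a reply
    return follow_up                     # 3 days, no response: follow up

def run_agent(state, max_steps=10):
    """Loop until the goal is met, with a safety cap on iterations."""
    for _ in range(max_steps):
        action = decide_next_action(state)
        if action is None:
            break
        action(state)
    return state

final = run_agent({"account": "Acme Corp"})
```

Unlike the workflow, the path isn’t hard-coded: the loop re-reads state each turn and branches differently if, say, a reply arrives early. That flexibility is exactly what makes agents harder to audit and keep on-brand.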
The question isn’t which tool. It’s what your tools actually know.
Every conversation about AI adoption eventually arrives at the same question: which surface does your team live in? Slack? Email? Notion? Google Workspace?
It’s an incomplete question.
The surface is one side of the coin: it’s an access point. The AI integrated into your surface is only as useful as what it can actually reason over.
The whole question is: if you gave your AI everything your team knows, what would it have? And is what it has worth having? Then: how do we make it accessible inside the way a human actually works?
For most marketing teams, the ICP lives in a slide deck from 18 months ago. Positioning has been debated in a Slack thread nobody can find. User research is in a folder three people know exists. All of it exists somewhere — in pieces, owned by individuals rather than the organization.
That’s not an AI problem. That’s a knowledge architecture problem.
When Shopify made AI mandatory across the company, they weren’t chasing a new tool. They were solving a knowledge problem: how do you get your AI to reason over everything your organization actually knows, instead of starting from scratch every time? That question applies whether your team is writing code or writing campaigns.
AI value isn’t linear. It scales directly with the quality and accessibility of the knowledge underneath it.
The compounding insight: a team with a strong knowledge layer and basic tooling will consistently outperform a team with premium AI tools and scattered, disorganized knowledge. Every time. The teams getting compounding returns from AI aren’t the ones who picked the right surface. They’re the ones who built a knowledge layer their AI can reason over, and who keep it current. So the equation is: surface + knowledge layer = ♥
This is why I developed something I’m referring to as “Brand OS”: a framework for thinking about and building your knowledge architecture. When your ICP, your positioning, your customer insights, and your proof points live in one governed place that your AI can access, everything downstream compounds. That’s the moat. Not “we use AI” — but “our AI knows everything we know.”
If you gave your AI a brief to write for your next campaign, what would it actually have to work with? What’s documented vs. living in someone’s head? The answer to that question tells you more about your AI readiness than any tool assessment ever will.
If we don’t fix the fundamentals now, AI just becomes the next costume for the same problems.
We’ve been here before. The pattern is consistent across every major platform shift:
1. In the paid click era,
search and PPC let us buy attention and measure everything. We optimised clicks and targeting and let real customer understanding and brand building fade.
2. In the automation era,
nurture engines and marketing automation platforms took over. We optimised sends, MQLs, and sequences, and often de-optimised usefulness, clarity, and trust.
3. Now in the AI era,
models, assistants, and agents promise the same thing: more scale, more efficiency. The temptation is the same too: treat AI as a volume engine and skip the fundamentals and system design underneath.
new toy → short-term gains and hype → long-term erosion when we skip the basics.
From my vantage point, most failed AI marketing pilots are actually fundamentals problems.
When AI pilots fail, the root cause is almost never the AI. It’s a fundamentals problem. Before building anything new on top, ask yourself:
- Have you aligned on why you even need AI today? Have you distilled the core pain/friction you need to solve for? What's not working? Start with the problem, not the possibility.
- Do you know how to measure whether you’ve been successful on the other side of this shift? For example, how do you measure better-quality positioning? Or better-quality campaign outputs?
- Do you have leadership buy-in from the top down? Or has the AI strategy been a “let ’er rip” approach?
- Can everyone in your company state who the customer is? Is that picture fragmented? Shallow?
- Is your message flat/middling/confusing? Is it strong, but your people can’t find it or are unsure how to leverage it?
So my order of operations is:
Fundamentals first. Systems second. AI third. Most of the industry has that flipped.
If AI is everywhere, what’s actually yours?
Everyone can access similar models and tools. “We also use AI” is not a differentiator.
Your moat is your Brand Operating System (Brand OS):
Knowledge Base
Everything your team knows, believes, and can prove — in one governed place. Your Story lives here too.
- ICP, segments, and proof
- Use cases and objections
- Product truth and proof points
- One narrative, one voice
- Messaging by segment
- What you say no to
Brand OS System
The workflows, guardrails, and tools that operationalize the knowledge. How the story gets into the work.
Brand OS sits at the intersection of three things: your customer truth (who you’re for, what they need, what makes you the right choice), your brand story (the one narrative that holds across every channel, role, and format), and your operational system (the workflows, guardrails, and AI-readable structure that makes the story usable at scale).
When all three exist and connect, your team stops starting from scratch. Your AI stops hallucinating your value prop. Your new hires get up to speed in days, not months. Your sales team tells the same story as your content team.
Brand OS isn’t a brand strategy. It’s infrastructure: the foundation everything else compounds on top of.
Imagine the same marketer, same ask:
“We need a campaign for mid-market ops leaders who are scaling past 50 people and hitting process ceilings — focus on handoff failures and what it costs them.”
Same marketer. Same AI. The difference in quality comes from whether the Brand OS exists.
Once you see Brand OS as the base, the AI landscape becomes less overwhelming.
There are five distinct ways AI is entering your world as a marketing leader. They have different implications, different urgency, and different dependencies. The mistake most teams make is treating them all the same — buying tools from Bucket 3 before they’ve understood what’s happening in Bucket 1.
Bucket 1: Google AI Overviews, buyers using ChatGPT/Claude to compare you vs. alternatives, AI answers shaping how your category is described before a human ever visits your site.
What this means: Your Brand OS is now training data whether you planned it that way or not. The companies with a clear, documented, consistent story are getting summarised accurately by AI. Everyone else is getting described by their competitors.
Bucket 2: Salesforce Einstein, HubSpot AI, Google Workspace Gemini, Notion AI — the tools you already pay for are getting AI features layered in.
What this means: Ignore the upsell pressure. Evaluate based on actual workflow fit, not feature lists. And remember: if your data in those tools is messy, the AI output will be too.
Bucket 3: Claude, ChatGPT, Gemini as thinking partners. Custom GPTs, Claude Projects, Cowork sessions tailored to your Brand OS.
What this means: This is where most teams get their first real ROI. Start with a generic assistant pointed at your Brand OS. Then build narrow, specific tools for your team’s highest-friction workflows.
Bucket 4: New tools built AI-first — Jasper, Writer, Synthesia, and dozens of others positioning to replace your existing martech stack, or at least your agency spend.
What this means: Some of these will win. Many won’t. Don’t add to your stack without a clear job-to-be-done and a named owner. The graveyard of AI subscriptions is full of great demos.
Bucket 5: Autonomous agents that run multi-step processes without human oversight. Interesting in narrow, well-defined contexts. Risky, expensive, and often unnecessary for most GTM work right now.
What this means: Don’t let vendor FOMO drive you here before you’ve done the foundational work. Bucket 5 only compounds on top of Buckets 1–4.
Getting there isn’t about which tools you adopt first. It’s about what your team needs to be ready to do at each stage.
The staircase isn’t about tools. It’s about whether your team knows how to make the right call — at every level, on the easy decisions and the hard ones.
Most companies measure AI maturity by tooling. How many integrations. How many agents. How many automations running. But that’s the wrong unit of measurement.
The right question is: what can your team decide, produce, and sustain, confidently, at every level?
That’s a capability question. And it has a progression:
Dependent
Output quality depends on who’s available. A great brief happens when the right person writes it. Messaging shifts meeting to meeting. The brand lives in people’s heads, not in shared infrastructure.
Consistent
The team draws from a shared foundation. Brand OS exists. Decisions don’t require escalation because the thinking is already documented. AI has something real to work from.
Leveraged
Workflows run reliably. The team produces more without proportionally more effort. Leadership is a quality check, not a dependency.
Compounding
Every campaign, every piece of content, every customer interaction makes the system stronger. The team isn’t just executing. They’re developing the judgement to make everything better as they go.
The insight most AI-forward companies miss: a team at the Consistent stage, using modest tooling, will outperform a Dependent team with a full agent stack. Every time.
Brand OS is what moves you from Dependent to Consistent. That’s the unlock. Everything above it follows.
The progression above describes what your team produces. Here’s what it doesn’t say.
The real work isn’t in the tools. It isn’t in the workflows. It isn’t even in the Brand OS document itself.
It’s in the humans who build it.
When your team goes through the process of defining who you’re for, what you promise, and why you win — they develop something that can’t be templated: the instinct to know what good looks like here, in this company, for this brand. That instinct is what makes AI useful. Not the other way around.
The most valuable capability you can build isn’t prompt-writing. It’s the ability to look at an AI output and say: this is almost right, but here’s exactly what’s off and why. Sharp critique. Specific feedback. Standards that exist somewhere other than the founder’s head.
AI outputs improve in direct proportion to the quality of human judgement applied to them. A team with clear standards and a shared foundation will consistently outperform a team with better tools and blurry ones. Every time.
I’ve seen this in my own work. Claude went from producing writing that was fine to producing writing I’d put my name on, not because the model got smarter, but because I got more precise about what I wanted and why. The feedback loop is the work. The compounding happens there.
Brand OS gives your whole team that same reference point. A shared vocabulary for what good sounds like. A foundation for critique that doesn’t require the founder in the room.
That’s what makes the capability compound. Not the tools. The judgement.
If I were sitting in your seat, here’s what I’d say no to in the next 6–12 months.
“Agents Everywhere” or “I need to cut half my marketing team”
I would not sign up for any plan that implies autonomous agents running large chunks of GTM, or replacing whole roles in the next couple of quarters. The economics, data quality, and trust just aren’t there in most marketing or commercial orgs yet.
Tool-Led “Strategy”
I would say no to vendor-led roadmaps masquerading as strategy, and deals driven by FOMO (“our competitors just bought X” or “we’re late to the AI table”). Tools don’t create capability — they amplify it. If your team doesn’t have the foundation, the judgement, or the shared standards to direct AI outputs, adding more tooling just produces more polished mediocrity, faster. Build the capability first. Then decide what tools serve it.
Dozens of Unconnected Pilots
I would cap the number of active AI experiments and kill anything that doesn’t map to a clear workflow, doesn’t have a named owner, or can’t be tied to a concrete “job to be done” in the system. “We tried 17 things and learned nothing we can reuse” is the fastest way to burn trust.
AI Work That Ignores Data and Definitions
I would not green-light AI projects that rely on dirty data we already don’t trust, depend on definitions that leaders don’t agree on, or assume “the model will figure it out” instead of us doing our homework. If the humans don’t agree on what “ICP” means, the AI definitely won’t.
Pilots Without a Clear Human Outcome
I’d say no to experiments where we can’t articulate how this makes someone’s day meaningfully better (time saved, less rework, better decisions). Your team’s attention is your scarcest resource. Spend it where there’s a real shot at better work, not just cooler screenshots.
If this works, your day feels different.
When this works, your day shifts. Less time chasing context, more time on strategic, creative decisions. Briefs stop needing three revision rounds. Sales, product, marketing, and CS tell one coherent story. AI feels like a quiet force multiplier, not another fire drill.
Brand OS gives you the foundation. What you’re really buying is a different operating posture:
- Marketing as a strategic engine, not an emergency service.
- Growth that feels deliberate, not accidental.
- Leadership operating as one system rather than five disconnected execs.
- A rubric for evaluating AI tools against your system — not the other way around.
The tools will keep changing. The acronyms will keep multiplying. What doesn’t change is the need to really know your customer, the need for a clear, shared story, and the need for a system that lets smart people do their best work.

Ready to build your Brand OS?
If this resonated and you want to talk about what Phase 1 could look like for your team, I’m taking on a small number of engagements this quarter.