When Platforms Say ‘Don’t Optimize,’ Smart Teams Run Experiments

A quick note up front, so we start on the right foot.

The research I’m about to reference is not mine. I did not run these experiments. I’m not affiliated with the authors. I’m not here to “endorse” a camp, pick a side, or crown a winner. What I am going to endorse, loudly and without apology, is measurement. Replication. Real-world experiments. The kind of work that teaches us in real time, in real life, what changes when an LLM sits between customers and content. We need more tested data, and this is one of those starting points.

If you do nothing else with this article, do this: Read the paper, then run your own test. Whether your results agree or disagree, publish them. We need more receipts and fewer hot takes.

Now, the reason I’m writing this.

Over the last year, the industry has been pushed toward a neat, comforting story: GEO is just SEO. Nothing new to learn. No need to change how you work. Just keep doing the fundamentals, and everything will be fine.

I don’t buy that.

Not because SEO fundamentals stopped mattering. They still matter, and they remain necessary. But because “necessary” is not the same as “sufficient,” and because the incentives behind platform messaging do not always align with the operational realities businesses are walking into.

Image Credit: Duane Forrester

The Narrative And The Incentives

If you’ve paid attention to public guidance coming from the leading search platforms lately, you’ve probably heard a version of: Don’t focus on chunking. Don’t create “bite-sized chunks.” Don’t optimize for how the machine works. Focus on good content.

That’s been echoed and amplified across industry coverage, though I want to be precise about my position here. I’m not claiming a conspiracy, and I’m not saying anyone is being intentionally misleading. I’m not doing that.

I am saying something much simpler, and it’s an opinion grounded in actual experience: when messaging repeats across multiple spokespeople in a tight window, it signals an internal alignment effort.

That’s not an insult, nor is it a moral judgment. That’s how large organizations operate when they want the market to hear one clear message. I was part of exactly that type of environment for well over a decade in my career.

And the message itself, on its face, is not wrong. You can absolutely hurt yourself by over-optimizing for the wrong proxy. You can absolutely create brittle content by trying to game a system you do not fully understand. In many cases, “write clearly for humans” is solid baseline guidance.

The problem is what happens when that baseline guidance becomes a blanket dismissal of how the machine layer works today, even if it’s unintentional. Because we are not in a “10 blue links” world anymore.

We are in a world where answer surfaces are expanding, search journeys are compressing, and the unit of competition is shifting from “the page” to “the selected portion of the page,” assembled into an answer the user never clicks past.

And that is where “GEO is just SEO” starts to break in my mind.

The Wrong Question: “Is Google Still The Biggest Traffic Driver?”

Executives love comforting statements: “Google still dominates search. Traditional SEO still drives the most traffic. Therefore, this LLM stuff is overblown.”

The premises are true, but the conclusion is where companies get hurt.

The biggest risk here is asking the wrong question. “Where does traffic come from today?” is a dashboard question, and it’s backward-looking. It tells you what has been true.

The more important questions are forward-looking:

  • What happens to your business when discovery shifts from clicks to answers?
  • What happens when the customer’s journey ends on the results page, inside an AI Overview, inside an AI Mode experience, or inside an assistant interface?
  • What happens when the platform keeps the user, monetizes the answer surface, and your content becomes a source input rather than a destination?

If you want the behavior trendline in plain terms, start here, with the 2024 SparkToro study, then take a look at what Danny Goodwin wrote in 2024 and his 2025 follow-up (spoiler: zero-click instances increased year over year). And while some sources are a couple of years old, you can easily find newer data showing the trend growing.

I’m not using these sources to claim “the sky is falling.” I’m using them to reinforce a simple operational reality: If the click declines, “ranking” is no longer the end goal. Being selected into the answer becomes the end goal.

That requires additional thinking beyond classic SEO. Not instead of it. On top of it.

The Platform Footprint Is Changing, And The Business Model Is Following

If you want to understand why the public messaging is conservative, you have to look at the platform’s strategic direction.

Google, for example, has been expanding AI answer surfaces, and it’s not subtle. Both AI Overviews and AI Mode saw announcements of large expansions during 2025.

Again, notice what this implies at the operating level. When AI Overviews and AI Mode expand, you’re not just dealing with “ranking signals.” You’re dealing with an experience layer that can answer, summarize, recommend, and route a user without a click.

Then comes the part everyone pretends not to see until it’s unavoidable: Monetization follows attention.

This is no longer hypothetical. Search Engine Journal covered Google’s official rollout of ads in AI Overviews, which matters because it signals this answer layer is being treated as a durable interface surface, not a temporary experiment.

Google’s own Ads documentation reinforces the same point: This isn’t just “something people noticed”; it’s a supported placement pattern with real operational guidance behind it. And Google noted midway through last year that AI Overviews monetize at a similar rate to traditional search, which is a quiet signal that this isn’t a side feature.

You do not need to be cynical to read this clearly. If the answer surface becomes the primary surface, the ad surface will evolve there too. That’s not a scandal so much as the reality of where the model is heading.

Now connect the dots back to “don’t focus on chunking”-style guidance.

A platform that is actively expanding answer surfaces has multiple legitimate reasons to discourage the market from “engineering for the answer layer,” including quality control, spam prevention, and ecosystem stability.

Businesses, however, do not have the luxury of optimizing for ecosystem stability. Businesses must optimize for business outcomes. Their own outcomes.

That’s the tension.

This isn’t about blaming anyone. It’s about understanding misaligned objectives, so you don’t make decisions that feel safe but cost you later.

Discovery Is Fragmenting Beyond Google, And Early Signals Matter

I’m on record that traditional search is still an important driver, and that optimizing in this new world is additive, not an overnight replacement story. But “additive” still changes the workflow.

AI assistants are becoming measurable referrers. Not dominant, not decisive on their own, but meaningful enough to track as an early indicator. Here are two examples that capture this trend.

TechCrunch noted that while it’s not enough to offset the loss of traffic from search declines, news sites are seeing growth in ChatGPT referrals. And Digiday has data showing traffic from ChatGPT doubled from 2024 to 2025.

Why do I include these?

Because this is how platform shifts look in the early stages. They start small, then they become normal, then they become default. If you wait for the “big numbers,” you’re late in building competence and taking action. (Remember “directories”? Yeah, Search ate their lunch.)

And competence, in this new environment, is not “how do I rank a page.” It’s “how do I get selected, cited, and trusted when the interface is an LLM.”

This is where the “GEO is just SEO” framing stops being a helpful simplification and starts becoming operationally dangerous.

Now, The Receipts: A Paper That Tests GEO Tactics And Shows Measurable Differences

Let’s talk about the research. The paper I’m referencing here is publicly available, and I’m going to summarize it in plain English, because most practitioners do not have time to parse academic structure during the week.

At a high level, the paper (“E-GEO: A Testbed for Generative Engine Optimization in E-Commerce”) tests whether common human-written rewrite heuristics actually improve performance in an LLM-mediated product selection environment, then compares that to a more systematic optimization approach. It uses ecommerce as the proving ground, which is smart for one reason: Outcomes can be measured in ways that map to money. Product rank and selection are economically meaningful.

This is important because the GEO conversation often gets stuck in “vibes.” In contrast, this work is trying to quantify outcomes.

Here’s the key punchline, simplified:

A lot of common “rewrite advice” does not help in this environment. Some of it can be neutral. Some of it can be negative. But when the authors apply a meta-optimization process, prompts improve consistently, and the optimized patterns converge on repeatable features.

That convergence is the part that should make every practitioner sit up. Because convergence suggests there are stable signals the system responds to. Not mystical. Not magical. Not purely random.

Stable signals.

And this is where I come back to my earlier point: If GEO were truly “just SEO,” then you would expect classic human rewrite heuristics to translate cleanly. You would expect the winning playbook to be familiar.

This paper suggests the reality is messier. Not because SEO stopped mattering, but because the unit of success changed.

  • From page ranking to answer selection.
  • From persuasion copy to decision copy.
  • From “read the whole page” to “retrieve the best segment.”
  • From “the user clicks” to “the machine chooses.”

What The Optimizer Keeps Finding, And Why That Matters

I want to be careful here: I’m not telling you to treat this paper like doctrine. You should not accept it at face value and suddenly adopt it as gospel. You should treat it as a public experiment that deserves replication.

Now, the most valuable output isn’t the exact numbers in their environment; it’s the shape of the solution the optimizer keeps converging on. (“Optimizer” is the authors’ name for their system/process.)

The optimized patterns repeatedly emphasize clarity, explicitness, and decision-support structure. They reduce ambiguity. They surface constraints. They define what the product is and is not. They make comparisons easier. They encode “selection-ready” information in a form that is easier for retrieval and ranking layers to use.

That is a different goal than classic marketing copy, which often leans on narrative, brand feel, and emotional persuasion.

Those things still have a place. But if you want to be selected by an LLM acting as an intermediary, the content needs to do a second job: become machine-usable decision support.

That’s not “anti-human.” It’s pro-clarity, and it’s the kind of detail that will come to define what “good content” means in the future, I think.

The Universal LLM-Optimization Rewrite Recipe, Framed As A Reusable Template

What follows is not me inventing a process out of thin air. This is me reverse-engineering what their optimization process converged toward, and turning it into a repeatable template you can apply to product descriptions and other decision-heavy content.

Treat it as a starting point, then test it. Revise it, create your own version, whatever.

Step 1: State the product’s purpose in one sentence, with explicit context.
Not “premium quality.” Not “best in class.” Purpose.

Example pattern:
This is a [product] designed for [specific use case] within [specific constraints], for people who need [core outcome].

Step 2: Declare the selection criteria you satisfy, plainly.
This is where you stop writing like a brochure and start writing like a spec sheet with a human voice.

Include what the buyer cares about most in that category. If the category is knives, it’s steel type, edge retention, maintenance, balance, handle material. If it’s software, it’s integration, security posture, learning curve, time-to-value.

Make it explicit.

Step 3: Surface constraints and qualifiers early, not buried.
Most marketing copy hides the “buts” until the end. Machines do not reward that ambiguity.

Examples of qualifiers that matter:
Not ideal for [X]. Works best when [Y]. Requires [Z]. Compatible with [A], not [B]. This matters if you [C].

Step 4: State what it is, and what it is not.
This is one of the simplest ways to reduce ambiguity for both the user and the model.

Pattern:
This is for [audience]. It is not for [audience].
This is optimized for [scenario]. It is not intended for [scenario].

Step 5: Convert benefits into testable claims.
Instead of “durable,” say what durable means in practice. Instead of “fast,” define what “fast” looks like in a workflow.

Do not fabricate. Do not inflate. This is not about hype. It’s about clarity.

Step 6: Provide structured comparison hooks.
LLMs often behave like comparison engines because users ask comparative questions.

Give the model clean hooks:
Compared to [common alternative], this offers [difference] because [reason].
If you’re choosing between [A] and [B], pick this when [condition].

Step 7: Add evidence anchors that improve trust.
This can be certifications, materials, warranty terms, return policies, documented specs, and other verifiable signals.

This is not about adding fluff. It’s about making your claims attributable and your product legible.

Step 8: Close with a decision shortcut.
Make the “if you are X, do Y” moment explicit.

Pattern:
Choose this if you need [top 2–3 criteria]. If your priority is [other criteria], consider [alternative type].

That’s the template*.

Notice what it does. It turns a product description into structured decision support, which is not how most product copy is written today. And it is an example of why “GEO is just SEO” fails as a blanket statement.
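To make that concrete, here is a minimal sketch in Python of what “structured decision support” can look like as data. The field names, the render helper, and the knife example are mine, purely for illustration; they are not taken from the paper, and nothing about this particular format is prescriptive.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionReadyDescription:
    """Illustrative container for the eight template elements above.
    Field names are shorthand of my own, not terminology from the paper."""
    purpose: str                    # Step 1: one-sentence purpose, with context
    selection_criteria: List[str]   # Step 2: category-specific criteria you satisfy
    constraints: List[str]          # Step 3: qualifiers, surfaced early
    is_for: str                     # Step 4: who it is for
    is_not_for: str                 # Step 4: who it is not for
    testable_claims: List[str]      # Step 5: benefits stated as verifiable claims
    comparison_hooks: List[str]     # Step 6: clean "compared to X" statements
    evidence_anchors: List[str]     # Step 7: warranty, certifications, documented specs
    decision_shortcut: str          # Step 8: the "choose this if..." closer

    def render(self) -> str:
        """Assemble the fields into copy, with constraints and scope up front."""
        return "\n\n".join([
            self.purpose,
            "Key criteria: " + "; ".join(self.selection_criteria) + ".",
            "Know before you buy: " + " ".join(self.constraints),
            f"This is for {self.is_for}. It is not for {self.is_not_for}.",
            " ".join(self.testable_claims),
            " ".join(self.comparison_hooks),
            "Evidence: " + "; ".join(self.evidence_anchors) + ".",
            self.decision_shortcut,
        ])

# Hypothetical example (a chef's knife), purely to show the shape:
knife = DecisionReadyDescription(
    purpose="An 8-inch chef's knife designed for daily home prep, for cooks who want edge retention without professional sharpening gear.",
    selection_criteria=["VG-10 steel", "60-61 HRC hardness", "hand-wash only", "octagonal wood handle"],
    constraints=["Not ideal for frozen food or bones.", "Requires honing every few uses."],
    is_for="home cooks who prep most nights",
    is_not_for="commercial kitchens that need dishwasher-safe tools",
    testable_claims=["Durable here means a 60-61 HRC blade backed by a 10-year warranty."],
    comparison_hooks=["Compared to a heavier German-style 8-inch knife, this favors precision slicing over hard chopping."],
    evidence_anchors=["10-year warranty", "30-day returns", "full spec sheet on the product page"],
    decision_shortcut="Choose this if you prioritize edge retention and precision. If your priority is low maintenance, consider a softer stainless blade.",
)

print(knife.render())
```

The point isn’t the Python. The point is that every field maps to an explicit, checkable statement a retrieval layer can lift cleanly.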

SEO fundamentals help you get crawled, indexed, and discovered. This helps you get selected when discovery is mediated by an LLM.

Different layer. Different job.

Saying GEO = SEO and SEO = GEO is an oversimplification that, once normalized, leads people to miss the fact that the details matter. The differences, even small ones, matter, and they have real repercussions.

*A much deeper-dive PDF version of this process is available free to my Substack subscribers via my resources page.

What To Do Next: Read The Paper, Then Replicate It In Your Environment

Here’s the part I want to be explicit about. This paper is interesting because it’s measurable, and because it suggests the system responds to repeatable features.

But you should treat it as a starting point, not a law of physics. Results like this are sensitive to context: industry, brand authority, page type, and even the model and retrieval stack sitting between the user and your content.

That’s why replication matters. The only way we learn what holds, what breaks, and what variables actually matter is by running controlled tests in our own environments and publishing what we find. If you work in SEO, content, product marketing, or growth, here is the invitation.

Read the paper here.

Then run a controlled test on a small, meaningful slice of your site.

Keep it practical:

  • Pick 10 to 20 pages with similar intent.
  • Split them into two groups.
  • Leave one group untouched.
  • Rewrite the other group using a consistent template, like the one above.
  • Document the changes so you can reverse them if needed.
  • Measure over a defined window.
  • Track outcomes that matter in your business context, not just vanity metrics.

And if you can, track whether these pages are being surfaced, cited, paraphrased, or selected in the AI answer interfaces your customers are increasingly using.
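If it helps to make “split them into two groups” and “document the changes” concrete, here is a minimal sketch in Python. The URLs are placeholders, and the hash-based assignment is just one way to keep the split deterministic and reversible; use whatever assignment and logging your team already trusts.

```python
import csv
import hashlib
from datetime import date

def assign_group(url: str) -> str:
    """Deterministically assign a page to control or treatment.
    Hashing the URL keeps the split stable and reproducible
    without needing to store a separate random seed."""
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

# Placeholder URLs; swap in your own 10 to 20 similar-intent pages.
pages = [
    "https://example.com/products/widget-a",
    "https://example.com/products/widget-b",
    "https://example.com/products/widget-c",
    "https://example.com/products/widget-d",
]

# Log the assignment so the change set is documented and reversible,
# as the checklist above suggests.
with open(f"geo_test_assignments_{date.today().isoformat()}.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "group", "assigned_on"])
    for url in pages:
        writer.writerow([url, assign_group(url), date.today().isoformat()])
```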

You are not trying to win a science fair. You are trying to reduce uncertainty with a controlled test. If your results disagree with the paper, that’s not failure. That’s signal.

Publish what you find, even if it’s messy. Even if it’s partial. Even if the conclusion is “it depends.” Because that is exactly how a new discipline becomes real. Not through repeating platform talking points. Not through tribal arguments. Through measurement.

One Final Level-Set, For The Executives Reading This

Platform guidance is one input, not your operating system. Your operating system is your measurement program. SEO is still necessary. If you can’t get crawled, you can’t get chosen.

But GEO, meaning optimizing for selection inside LLM-mediated discovery, is an additional competence layer. Not a replacement. A layer. If you decide to ignore that layer because a platform said “don’t optimize,” you’re outsourcing your business risk to someone else’s incentive structure.

And that’s not a strategy. The strategy is simple: learn the layer by testing the layer.

We need more people doing exactly that.

This post was originally published on Duane Forrester Decodes.


Featured Image: Rawpixel.com/Shutterstock
