Google, Meta, and the long history of misaligned incentives in paid media

I’m getting a mid-career executive MBA. Last week, in class, we discussed the interaction between automation and advertising. The lecture covered why A/B testing in Meta is less valuable now, since Facebook can auto-optimize faster and better than marketers can on their own.

A classmate took the logical leap and asked the professor, “If digital channels have more data and more processing power, why don’t advertisers just give them a URL and a credit card and let them go wild?”

The argument has real merit. Google, Meta, and LinkedIn have access to more data than any agency ever will. Their optimization engines are improving fast. Handing them a budget and a URL and walking away isn’t entirely crazy.

But that means we’d need to have faith in the channels to optimize media in a business’s best interests, and there’s a long, proud history of that not being the case.

1. The opt-in that wasn’t

About six years ago, we met with a Google rep who pitched a product that introduced broader, more aggressive targeting and bidding. We listened to the pitch and said no. We didn’t want to try it. The reps turned it on anyway.

What happened next was what we predicted. The campaigns spent significantly more money and didn’t generate any additional conversions.

We had to comp the client for the wasted spend, which was bad enough. But what made it worse was the principle of the thing: we hadn’t agreed to this. Google made unauthorized changes to our account.

When I tried to get the money back, Google’s position was that we’d set our campaign budgets at a certain level, and they were within their rights to spend up to that amount. That framing ignores that a budget cap is a ceiling, not an invitation. 

Our agency methodology is to never hit a budget cap. We set those numbers based on the strategy we’d approved, not the one they decided to test. I hounded them for weeks, but never got any resolution. It still makes me angry.

The reps were clearly incentivized to get adoption of the new feature. When it didn’t work, there was no accountability and no recourse. We were left covering the cost of a decision we explicitly declined.

What’s being misrepresented

Budget caps were treated as implicit consent to spend. A product we declined was activated without authorization, and when it failed, the platform pointed to our own settings as justification.

The incentive structure rewarded the reps for turning it on. There was no corresponding mechanism to make the advertiser whole when it didn’t work.

Dig deeper: Google rep’s unauthorized ad changes spark advertiser concerns


2. The profit maximization pitch

This was years ago, on a successful retainer account. A pair of senior Google reps sat across from us and asked what our client’s gross margin was. Around 50%, we said. They went to the whiteboard and wrote out: as long as (overall revenue × 50%) – overall media cost ≥ 0, we should keep spending money on ads.

On the surface, the math sounds right. In practice, it has two problems.

  • It assumes the reported conversions are incremental, meaning they wouldn’t have happened without the paid ad. A substantial portion of any Google campaign’s reported conversions, particularly in brand and retargeting, are users who were already going to convert.
  • The model assumes a flat cost curve, where the 500th conversion costs the same as the 50th. It does not. Marginal returns fall as you scale. The last dollars of spend are always the least efficient, but they’re exactly what this pitch is designed to help Google access. (The correct condition is marginal: profit is maximized where (marginal revenue × 50%) – marginal cost = 0.)
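The gap between the two rules can be sketched with a toy diminishing-returns revenue curve. Every number here is invented for illustration; the point is only that the average test and the marginal test diverge:

```python
# Toy illustration: the whiteboard's average-profit test vs. the marginal
# test, on a made-up concave revenue curve. All numbers are invented.

GROSS_MARGIN = 0.5  # the client's ~50% gross margin from the pitch

def revenue_at(spend):
    """Invented diminishing-returns curve: revenue grows with sqrt(spend)."""
    return 400 * spend ** 0.5

for spend in (10_000, 25_000, 40_000):
    rev = revenue_at(spend)
    avg_profit = rev * GROSS_MARGIN - spend                # whiteboard test
    marginal_rev = revenue_at(spend + 1) - revenue_at(spend)
    marginal_profit = marginal_rev * GROSS_MARGIN - 1      # per extra dollar
    print(f"spend {spend}: avg profit {avg_profit:,.0f}, "
          f"marginal profit per $ {marginal_profit:.2f}")
```

On this curve the whiteboard rule says keep spending all the way to $40,000 (average profit is still ≥ 0), while marginal profit turns negative just past $10,000, which is where spend should actually stop.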

What’s being misrepresented

The model treats all reported conversions as incremental and assumes cost per conversion is constant across spend levels. Both assumptions are wrong, and together they can justify significant overspend.

3. The ‘higher CPCs buy better clicks’ pitch

This one still happens all the time. The pitch is that if you raise your CPCs, you’ll get access to higher-quality traffic. The implied logic is that conversion rate is influenced by CPC, and that if your investment isn’t high enough, you’re missing the best clicks.

There’s a version of this that has some truth to it. Higher CPCs can mean higher ad positions, which can mean higher impression frequency against the same users. More frequency can drive higher aggregate conversion rates, because repeated exposure matters.

But the argument glosses over the other side of that equation. 

  • Higher frequency has diminishing marginal returns. 
  • The third impression is worth less than the first. The tenth is worth a lot less.
  • The cost curve isn’t flat. You’re paying more per click at every step.

In practice, raising CPCs to chase quality traffic is almost always correlated with substantially worse overall return on ad spend.

This is a variant of the marginal return problem seen across these cases. The pitch frames the upside without acknowledging the cost curve. More spend gets positioned as access to better outcomes, when it often delivers the same outcomes at a higher price.
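A toy spreadsheet makes the asymmetry concrete. Assume, generously, that higher CPCs really do buy a somewhat better conversion rate; the numbers below are invented, but the shape is typical:

```python
# Invented scenario: conversion rate creeps up with CPC, but cost rises
# faster, so return on ad spend falls at every step.

BUDGET = 10_000
AOV = 100  # hypothetical average order value

scenarios = [
    # (cpc, conversion_rate)
    (1.00, 0.020),
    (2.00, 0.024),
    (4.00, 0.027),
]

for cpc, cvr in scenarios:
    clicks = BUDGET / cpc
    revenue = clicks * cvr * AOV
    print(f"CPC ${cpc:.2f}: cvr {cvr:.1%}, ROAS {revenue / BUDGET:.2f}")
```

Doubling and quadrupling the CPC buys a 20–35% better conversion rate in this sketch, yet ROAS still falls from 2.0 to under 0.7, because the cost curve outruns the quality gain.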

What’s being misrepresented

CPC and conversion rate are presented as if higher bids unlock better traffic. In most cases, the incremental cost outpaces the incremental return. The pitch frames diminishing returns as an opportunity, rather than a constraint.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

4. The learning phase as a get-out-of-jail card

“If your Meta campaigns are underperforming, it’s because the algorithm just needs more time to learn.”

“Don’t make changes, and don’t reduce budget, just give the platform more data.” 

This is sometimes true. Machine learning systems need volume to optimize effectively, and premature intervention can reset progress.

But “it needs to learn” has become a catch-all explanation that’s almost impossible to disprove in the short run. It explains away poor CPAs, delays accountability, and keeps spend flowing when a reasonable advertiser might otherwise pull back and reassess.

There’s rarely a clear definition of when the learning phase ends, which makes it a moving target. The learning phase ends when performance improves. If performance doesn’t improve, more learning is prescribed.

What’s being misrepresented

A real technical concept is being used in ways that resist falsification. When there’s no defined endpoint and no stated criteria for success, “it needs to learn” serves as a blank check for budgetary continuity.

5. The metric pivot: When conversions fail, sell sentiment

In many cases, YouTube or display campaigns aren’t driving measurable conversions. The rep’s suggestion: let’s look at brand measurement. We can measure recall rates, positive sentiment, and intent to purchase. These are real signals of brand health, and they matter in the long run.

But the shift from conversion to sentiment metrics tends to occur when conversion metrics are poor, not as a principled measurement strategy. Brand lift surveys measure awareness under controlled conditions, but they rely on self-reported intent and don’t connect to downstream revenue.

Recall is almost never translated into a cost per point of lift that can be compared across the media plan. You end up with a number that’s positive and presented as evidence of success, with no agreed-upon framework for what sufficient lift would look like.
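If lift is going to count as a success metric, it should at least be priced. A minimal version of the calculation, with hypothetical survey numbers:

```python
# Hypothetical brand-lift survey turned into a comparable cost figure.
spend = 50_000
recall_control = 0.30  # aided recall among users who didn't see the ads
recall_exposed = 0.35  # aided recall among users who did

lift_points = (recall_exposed - recall_control) * 100
cost_per_point = spend / lift_points
print(f"${cost_per_point:,.0f} per point of recall lift")
```

A cost per point of lift at least lets you ask whether the same budget would have bought more measurable value elsewhere in the media plan.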

What’s being misrepresented

A softer metric is substituted for a harder one after the harder one fails. Brand lift is a legitimate measurement tool when defined upfront as a success criterion. Introduced afterward, it functions as a consolation prize.

Dig deeper: PPC mistakes that humble even experienced marketers

6. Upper funnel combined with lower funnel for a blended average

Upper-funnel and lower-funnel campaigns serve different purposes and perform differently on a cost-per-acquisition basis. When a channel reports blended CPA across all campaign types, an average that looks acceptable can hide the fact that some portion of the media plan is wildly inefficient at the margin.
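A two-campaign example shows how the blend flatters the plan. The figures are invented:

```python
# Invented media plan: an acceptable blended CPA masking a weak segment.
campaigns = {
    "lower_funnel_search": {"spend": 30_000, "conversions": 1_000},
    "upper_funnel_video":  {"spend": 20_000, "conversions": 50},
}

total_spend = sum(c["spend"] for c in campaigns.values())
total_conversions = sum(c["conversions"] for c in campaigns.values())
print(f"Blended CPA: ${total_spend / total_conversions:.2f}")

for name, c in campaigns.items():
    print(f"{name}: ${c['spend'] / c['conversions']:.2f} CPA")
```

The blended $47.62 looks defensible, while the video line is actually converting at $400, more than 13× the search CPA.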

The argument for blending is that upper-funnel spend creates the conditions for lower-funnel performance. That is plausible, but plausibility isn’t the same as demonstrated causality. 

Often, it’s assumed the upper funnel is directly contributing and that, in aggregate, the system is profitable and fully incremental. In practice, it almost never is.

What’s being misrepresented

Aggregate CPA can look fine while specific segments of spend have no measurable return. Blending is a reporting choice, and it can obscure where money is and isn’t working.

7. View-through conversions: The numbers that shouldn’t count

A view-through conversion is counted when a user sees an ad, doesn’t click it, and then converts within some attribution window, often 24 hours or more. Platforms report these alongside click-through conversions by default. 

For retargeting campaigns, which by definition serve ads to people who have already visited your site, view-through attribution is particularly problematic. These users were likely going to return and convert regardless. The ad may have had nothing to do with it.

The issue isn’t that view-throughs aren’t meaningful. For a cold audience, some brand-influenced conversions happen without clicks.

The issue is that those conversions are almost never broken out proactively (you have to ask). And when you remove view-throughs from retargeting campaigns, the ROAS numbers can change dramatically. 

We’ve seen cases where removing VTAs cuts reported conversions by more than half. To be fair, Meta has become substantially more transparent by moving to incremental measurement options.
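The arithmetic behind that swing is simple, which is exactly why it should be broken out by default. Hypothetical retargeting numbers:

```python
# Hypothetical retargeting campaign: reported ROAS vs. click-only ROAS.
spend = 5_000
aov = 80  # invented average order value
click_conversions = 40
view_through_conversions = 60  # saw an ad, never clicked, converted anyway

reported_roas = (click_conversions + view_through_conversions) * aov / spend
click_only_roas = click_conversions * aov / spend
print(f"reported ROAS {reported_roas:.2f}, click-only {click_only_roas:.2f}")
```

Stripping the 60 view-throughs cuts reported conversions by more than half and drops ROAS from 1.6 to 0.64, and whether the remaining gap is real incrementality still needs a holdout test.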

What’s being misrepresented

View-through conversions inflate reported performance, particularly in retargeting, where incrementality is already low. Default reporting includes them without flagging the methodological problem.

Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro

8. The competitor benchmark as a spending lever

This one is a pattern. A channel rep brings industry benchmark data to a meeting showing that your competitors are spending at a level above your current budget. The implication is clear: you’re being outspent, and you should close the gap.

Industry benchmarks are among the most valuable inputs a channel can provide. Knowing where you sit relative to the market is useful context for planning. The problem is how they get deployed. More often than not, benchmark data shows up as a tool to expand media spend, not as a neutral input into strategy.

And it works. CEOs and CMOs are particularly susceptible to this framing. Nobody wants to hear that a competitor is outspending them.

The emotional pull of “they’re investing more than you” is hard to counter with a measured conversation about marginal returns or strategic fit. The benchmark becomes the argument, and the argument is almost always “spend more.”

What gets lost is any discussion of whether:

  • The competitor’s spend is actually working for them.
  • Your business model and margins support the same level of investment.
  • The benchmark even reflects an apples-to-apples comparison.

Competitive spend data without context is just a number that makes your budget feel inadequate.

What’s being misrepresented

Benchmark data is real, but it’s selectively introduced to justify budget increases rather than treated as one input among many. The framing skips over whether the comparison is meaningful and relies on competitive anxiety to sell.

9. The default settings trap

This one is hard to frame as a single incident because it’s everywhere. I’ve talked to so many people trying to break into the industry, or launch their first campaigns, and the story is almost always the same. 

They follow the platform’s setup guide, accept the default settings, and end up opted into programs that have close to zero chance of being successful.

This is true across pretty much every major channel. 

  • LinkedIn defaults you into audience network inventory that runs outside the LinkedIn feed. 
  • Google opts you into display inventory when you’re trying to run search. Keywords default to broad match, which casts far too wide a net out of the box. Suggested CPCs are astronomical. 
  • Google’s geographic targeting defaults to “presence or interest” rather than actual location. 

Each of these defaults, taken individually, could be defended as a reasonable starting point. Taken together, they create a setup that maximizes the platform’s revenue from day one, before the advertiser knows what’s happening.

A new advertiser following the guided setup is accepting a configuration that the platform designed, and the platform’s incentives aren’t aligned with efficient spend.

This one is genuinely difficult to solve. Platforms need to provide default settings, and they can’t expect every new advertiser to understand every option. 

But there’s something predatory about the gap between what people think they’re signing up for and what they’re getting. The defaults are revenue-optimized for the channel, not performance-optimized for the advertiser.

What’s being misrepresented

Setup guides and default settings are presented as best practices when they’re actually configurations that favor the platform’s revenue. New advertisers trust the guided experience, and have no reason to suspect the defaults are working against them.

Dig deeper: Are you being manipulated by Google Ads?

10. The tracking gap as a faith exercise

Privacy regulations and platform changes have created real limitations in conversion tracking. GDPR and Apple’s App Tracking Transparency aren’t invented problems. 

We have less visibility than we used to, and the platforms have responded by layering probabilistic modeling and modeled conversions on top of deterministic tracking.

But the tracking gap has also become a convenient shelter for underperformance. The argument goes like this:

  • “The conversions are happening, we just can’t see them all yet. There’s latency in the data.”
  • “There are limits to what can be tracked. We need a longer attribution window.”
  • “We need more time for the modeled data to populate. And in the meantime, here are some proxy metrics that we think are directionally valid, so let’s keep pushing.”

Each of those can be true in isolation. Modeled conversions take time to appear. Attribution is harder than it was five years ago. Proxy metrics can be useful when direct measurement breaks down. 

The problem is when all of these caveats get stacked together and used to justify sustained spend in the absence of any measurable result. At some point, “the data will come in” stops being a reasonable expectation and becomes an article of faith.

The tracking gap is real, but it cuts both ways. If you can’t measure the result, you also can’t prove the spend is working. The platform’s default position is to assume it is, and keep going. The advertiser’s job is to ask what happens if the modeled conversions never materialize, and what the fallback plan looks like if they don’t.

What’s being misrepresented

Legitimate tracking limitations are used to defer accountability indefinitely. When measurement is hard, the platform’s recommendation is always to maintain or increase spend, never to reduce it. The uncertainty gets resolved in the channel’s favor by default.


What does this mean for AI-run campaigns?

None of this is an argument that agencies are irreplaceable in their current form. We used to question tCPA, and now it’s a preferred bidding strategy. Automation handles execution-level work that used to require skilled practitioners. In-house teams are viable for more companies than they used to be.

But the argument for fully autonomous, channel-run advertising assumes the channel will optimize for your outcomes rather than revenue. Even if we imagine new profit-sharing contracts, this assumption carries real risk.

And I’m not blaming reps or the channels. They believe in their products, but they’re also measured on metrics that create a predictable drift in how they frame data. I should note that agencies struggle with misaligned incentives as well.

The advertiser’s job, with or without an agency, is to keep asking the inconvenient questions.

  • What is the marginal return at this spend level?
  • What percentage of conversions are view-throughs?
  • What does performance look like if we exclude brand search?
  • Are we measuring incrementality, or are we measuring correlation, and calling it causation?

Maybe the answer to everything is eventually full automation. But the entity building the machine shouldn’t be the one telling you when it’s ready.

Contributing authors are invited to create content for Search Engine Land and are chosen for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. Search Engine Land is owned by Semrush. Contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.

