In June 2025, we asked Google: “Does Squarespace support AMP?”
The AI Overview said yes. That was wrong—Squarespace had deprecated AMP support four months earlier.
We run a niche marketing blog that Google’s core ranking systems have consistently rewarded—even above larger, noisier sources. So when a top-ranked, factually correct article gets bypassed by AI Overviews, it points to a deeper issue in how the system weights content.
What Are AI Overviews?
Google’s AI Overviews (AIOs) are generative summaries that appear at the top of some search results. They use large language models to synthesize information from the web and present a quick answer to a query, often with citations. Unlike traditional search results, they don’t just rank pages; they attempt to summarize consensus across sources. AI Overviews have become a prominent and default experience for many searchers.
Why This Query Mattered
Google’s AI Overviews have been live for over a year now, but they still struggle with low-signal, binary questions—especially when newer, accurate answers contradict older consensus. This specific query is a clean diagnostic case for how well AIOs handle recency, source trust, and consensus bias:
Binary: Yes/no answer.
Low signal: No press release, no documentation from the Squarespace platform—only our updated post and a related forum thread.
Time buffer: We’ve seen Google’s AI summaries incorporate new content shortly after indexing. Four months was plenty of time for the AMP deprecation to be reflected—if the system had weighted the signals appropriately.
No visible first-party confirmation: Squarespace removed their AMP support doc. Only an obscure discontinued features list references the change.
So what happens when AI Overviews face isolated, accurate information that isn’t broadly echoed?
What We Observed
Our article—the only one with a correct, detailed explanation—was ignored. Instead, the AI Overview grounded its response in outdated sources and confidently returned:
“Yes, Squarespace supports AMP (Accelerated Mobile Pages).”
We reported it as factually incorrect. Roughly a week later, the answer changed—now citing Squarespace’s discontinued features page:
“Yes, Squarespace used to support Accelerated Mobile Pages (AMP), but this feature has been discontinued…”
But as of this writing, the summary still toggles between the outdated answer (“Yes”) and the partial correction (“Yes, but no longer”). In other words, the system ingested the discontinued features list but didn’t immediately weight it above older, redundant content. The correct answer is: No.
What This Tells Us About AIO Behavior
Google has stated that AI Overviews are built to synthesize information from high-quality content already surfaced in Search. But in this case the top-ranked, accurate page wasn’t used. The problem isn’t visibility—it’s how the system weights conflicting information against older, redundant consensus.
This low-stakes, factual query shows AI Overview behavior is reactive, not truth-seeking:
Consensus over correctness: Repeated, conventional info is favored—even if outdated or wrong.
Ranking ≠ grounding: AIOs may ignore top-performing content if it contradicts older consensus.
Feedback can work: Our report likely triggered a system recheck; the answer updated in about a week.
First-party helps, but doesn’t guarantee correction: Only after a Squarespace-owned page entered the system did the Overview begin to shift. Even then, it didn’t fully replace the outdated answer.
Freshness isn’t enough: User feedback was needed before recent information was incorporated into the AIO synthesis. Even then, the top organic page was ignored in favor of Squarespace’s buried discontinued features list.
Why This Still Matters
AI Overviews are a prominent and default experience for most users. And the system lags in cases like this, where:
You’re the only source saying it
There’s no redundant coverage
No platform-owned or high-authority (.gov, .edu, major brand) source confirms it
If you’re wondering whether Google’s AI Overviews are accurate and trustworthy—the answer is: not always.
As things stand, when changes happen in low-coverage spaces, AIOs are likely to deliver confidently wrong answers—and keep doing so until there’s enough signal to trigger a shift. That shift can happen when a “safe” source enters the system, when enough redundant coverage builds up, or when user feedback prompts a re-evaluation.
Search vs. AI Overviews vs. AI Mode
This isn’t a failure of Google Search. Our page on the topic ranks as expected. But AI Overviews don’t use the same logic as the core ranking systems—visibility and accuracy aren’t enough.
Both ChatGPT and Gemini answered the question correctly, citing our blog. Google’s newer AI Mode did the same.
This isn’t a coverage issue; it’s a weighting problem inside AI Overviews. Our content consistently sees organic traffic from Google search and is cited by large language models. When top-ranked, truthful, up-to-date content gets bypassed, it reveals a flaw in the AIO system—not the content.
What Are We Doing About It?
In this case, we’re testing and observing. We’re tracking how long it takes Google’s overviews to settle on the correct answer under different inputs. We’re also monitoring another low-signal, binary query to see how the system behaves without any intervention.
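For the curious, here is a minimal sketch of what that tracking looks like in practice. It is an illustration, not our actual tooling: the file name, query text, and helper function are hypothetical. The idea is simply to log each manual check of the Overview so we can later measure how long it takes to settle on the correct answer.

```python
# Minimal sketch (illustrative, not our production setup) of logging daily
# AI Overview observations for a few low-signal queries, so the time from
# publication to a stable, correct summary can be measured later.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("aio_observations.csv")  # hypothetical log file

def log_observation(query: str, observed_answer: str, is_correct: bool) -> None:
    """Append one manual check of an AI Overview to the CSV log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "query", "observed_answer", "is_correct"])
        writer.writerow([date.today().isoformat(), query, observed_answer, is_correct])

# Example: record today's check of the AMP query.
log_observation(
    query="Does Squarespace support AMP?",
    observed_answer="Yes, but this feature has been discontinued",
    is_correct=False,
)
```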
More broadly, we’re not chasing AI Overview placement or adjusting our content strategy around it. AIOs are still a moving target—rewarding consensus and repetition, not accuracy or expertise.
Consensus might help AI-powered search stabilize. But conformity isn’t a strategy, and the consensus-first behavior won’t hold forever. Summary systems will get better at weighting. We’ll continue with our holistic approach to visibility and keep publishing durable, accurate content—whether or not AI Overviews recognize it right away.