For more than 20 years, Google Search functioned on a simple contract. Users trusted that clicking a blue link would take them to the content as advertised. The headline matched the story. The website you saw was the website you got. That arrangement is now quietly dissolving.
Google has told The Verge that AI headlines in Discover are no longer an "experiment" but a "feature," because the change "performs well for user satisfaction." In practice, this means Google Discover, the personalised content feed on Android devices and the Google app homepage, now uses AI to generate summaries of articles, a treatment that extends to the headline itself.
The results have been troubling. Google's AI claimed that "US reverses foreign drone ban," citing a PCMag story, but the claim was false, and PCMag took pains to explain as much in the very story Google linked to. Another AI headline on a Tom's Hardware story, "Free GPU & Amazon Scams," misrepresents the article, which is about a customer whose cancelled GPU order Amazon shipped anyway; the piece says nothing about "Amazon Scams" at all.
These aren't isolated mistakes. One rewrite claimed "Steam Machine price revealed" when no pricing details were available, and a PC Gamer story about Baldur's Gate 3 gameplay was reduced to "BG3 players exploit children," stripping out the context that the "children" were non-player characters in a video game.
For publishers already struggling with Google's AI summaries, this represents another squeeze on their business. Small publishers are losing up to 60% of their Google referral traffic, according to recent analysis. When a publisher writes a headline, it is making editorial decisions about framing, emphasis, and accuracy. Publishers now face the uncomfortable reality that their content can be reframed on the world's dominant search engine without their input or approval.
There is a counterargument worth acknowledging. Google contends that it is testing a new design to change the placement of headlines and make topic details easier to digest before users explore links from across the web. Some supporters of AI-assisted curation suggest that shorter, more direct headlines could serve users who skim content quickly. However, this framing assumes that platforms should unilaterally decide how editorial work is presented, which collides with longstanding journalistic practice.
If an AI headline is inaccurate, users often blame the outlet, not Google, and over time this could erode trust in both the platform and the publishers, even when the underlying reporting was solid. Responsibility for editorial quality is being split without clear accountability. Google buries a "Generated with AI, which can make mistakes" disclosure behind a "See more" button, so the rewritten headline reads as if it were the publisher's own.
The broader concern sits at the intersection of competition and control. Since OpenAI arrived on the scene with products such as ChatGPT, major platforms have scrambled to build their own in-house AI engines, with Microsoft integrating AI into its Bing search engine. Google's push to integrate AI everywhere may feel like competitive necessity to the company, but it comes at a cost to the publishers and readers whose trust built its brand.
Industry suggestions are straightforward: keep the original headline by default, clearly denote any AI intervention, and provide a robust opt-out method for publishers. These guardrails would acknowledge both user desire for brevity and publisher responsibility for accuracy. Without them, what Google frames as innovation risks becoming what critics describe as a slow erosion of the open web's founding principle: that readers can see what journalists actually wrote.