AI content farms are websites that publish large volumes of AI-generated and often inaccurate news or information in order to generate advertising revenue or spread propaganda.
The new system combines Newsguard’s data with Pangram Labs’ automated AI-detection software. The process starts with software identifying websites whose content appears to be substantially generated by AI. Human analysts then review the results and remove false positives.
According to Newsguard, the system has so far identified 3,006 AI content farms. That figure has more than doubled over the past year and is currently growing by 300 to 500 new sites per month.
A website is classified as an AI content farm if it meets three conditions: a substantial share of its content is AI-generated, the site does not disclose its use of AI, and its presentation gives average readers the impression that the content was written by human journalists.
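The three conditions above amount to a simple conjunctive rule. As a minimal sketch only: the signal names, the 0.5 threshold, and the data structure below are illustrative assumptions, not Newsguard's actual detection criteria.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """Hypothetical per-site signals; field names are illustrative, not Newsguard's."""
    ai_generated_share: float      # fraction of sampled articles flagged as AI-generated
    discloses_ai_use: bool         # does the site state anywhere that it uses AI?
    mimics_human_journalism: bool  # bylines/layout suggesting human authorship

def is_ai_content_farm(site: Site, threshold: float = 0.5) -> bool:
    """All three conditions must hold (the threshold value is an assumption)."""
    return (
        site.ai_generated_share >= threshold
        and not site.discloses_ai_use
        and site.mimics_human_journalism
    )

print(is_ai_content_farm(Site(0.8, False, True)))  # → True
print(is_ai_content_farm(Site(0.8, True, True)))   # disclosure exempts the site
```

Note that disclosure alone changes the outcome: a site that openly labels its AI content would not be flagged under this rule, matching the second condition in the definition.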
The sites often use generic names such as “Times Business News” or “Business Post,” publish dozens of stories per day, and frequently become early sources of false claims about brands, health topics, politicians, or celebrities.
False claims hurt brands and fuel propaganda
Poorly controlled AI output has already led to entirely fabricated stories. In October 2025, for example, the AI content farm “News 24” spread the false claim that Coca-Cola had threatened to withdraw its Super Bowl sponsorship if Puerto Rican rapper Bad Bunny appeared in the halftime show. According to Newsguard, Coca-Cola is not even a Super Bowl sponsor. Even so, the site carried ads from Expedia, AT&T, YouTube, Priceline, Hotels.com, Skechers, and GoDaddy.
Another site, “CitizenWatchReport,” published the false allegation that two U.S. senators had spent $814,000 on hotels in Ukraine. That claim was later amplified by Russian state media and circulated further in the United States.
Newsguard said the system also captures websites linked to hostile influence operations from Russia, China, and Iran. Of the sites identified so far, 358 were allegedly connected to Storm-1516, a pro-Russian operation that publishes misleading content on websites designed to look like local newspapers in the U.S. and Europe. Newsguard noted that the true number is likely higher, as detection methods remain imperfect.
Advertisers are often funding the problem unknowingly
Many of the flagged sites are what the ad industry calls “made for advertising” properties — websites specifically designed to monetize low-quality content through programmatic ads. Newsguard had previously reported that 141 major brands appeared on such sites over a two-month period.
The new data feed is intended to help advertisers avoid placing ads on these websites. It has already been integrated into buying platforms such as The Trade Desk and can also be licensed directly by brands and agencies.
According to Newsguard, AI content farms are also appearing in Google services such as Google News and Google Discover, where they are able to gain significant visibility. The company suggests Google either cannot or does not effectively filter them out. In many cases, Google may also be earning revenue through ads served on those sites via AdSense.