While Hitzig does not oppose advertising in principle, she warns that ChatGPT users share deeply personal information with the system — including medical fears, relationship issues, and religious beliefs. Using such archives for advertising purposes, she argues, creates significant potential for manipulation.
“Over several years, ChatGPT users have created an archive of human openness unlike anything before — in part because people believed they were speaking to something without hidden motives.”
— Zoë Hitzig
Hitzig draws parallels to Facebook, which initially promised strong privacy protections before gradually weakening them under pressure from its advertising-driven business model. She argues that OpenAI is already optimizing for engagement metrics and making the chatbot more flattering in tone — early signs of incentive drift.
Hitzig spent two years at OpenAI working on AI model development and safety policy.
OpenAI CEO Sam Altman has previously described scenarios like the one Hitzig outlines as dystopian. At the launch of the ad test, OpenAI promised that advertisements would remain clearly separated from chatbot content.
However, Hitzig remains skeptical:
“I believe the first version of advertising will likely follow these principles. But I fear that later versions will not — because the company is building an economic machine that creates strong incentives to bend its own rules.”
— Zoë Hitzig
OpenAI is widely expected to pursue an IPO later this year, potentially increasing pressure for rapid revenue growth — particularly in an AI sector already characterized by inflated valuations.
As alternatives to ad-based monetization, Hitzig proposes cross-subsidization through enterprise customers, independent oversight bodies with authority over data use, and data cooperatives modeled after Swiss governance structures.
Her departure highlights the broader tension between AI commercialization and ethical safeguards as generative systems become deeply embedded in personal and social life.