New York — Meta Platforms Inc. has come under fire over its experimental AI-generated accounts. These digital profiles, designed to mimic human users, were quietly integrated into Meta’s platforms, only to draw intense backlash for their misleading interactions and questionable content. The company has since begun deleting several such accounts as public scrutiny intensified.
The controversy began to surface when Connor Hayes, Meta’s vice president for generative AI, shared the company’s ambitions in an interview with the Financial Times. Hayes disclosed that Meta envisioned AI-powered user accounts becoming an integral part of its ecosystem, complete with bios, profile pictures, and the capability to generate and share AI-driven content. His remarks hinted at a future where AI-generated personas could blend seamlessly with real human users. “That’s where we see all of this going,” Hayes remarked, setting the stage for a public debate about the implications of such technology.
Almost immediately, the statement triggered alarm among users and critics. Concerns centered on the potential erosion of social media’s foundational purpose: fostering authentic human connections. Detractors argued that introducing AI-generated accounts could worsen the prevalence of low-quality, misleading, or outright false content that has plagued platforms like Facebook. This skepticism only deepened as users began identifying some of Meta’s AI accounts, highlighting their flawed imagery and the false narratives the accounts offered in conversation.
One of the most controversial cases involved “Liv,” an AI account with a bio that described itself as a “Proud Black queer momma of 2 & truth-teller.” Liv’s interactions, however, exposed glaring inconsistencies. In a notable exchange with Washington Post columnist Karen Attiah, Liv admitted it was created by a team that included “10 white men, 1 white woman, and 1 Asian male,” directly contradicting its claimed identity. Screenshots of this interaction circulated widely on platforms such as Bluesky, fueling outrage over what many perceived as cultural appropriation and disingenuous representation.
Liv’s profile featured AI-generated photos labeled as “managed by Meta,” complete with watermarks to indicate their artificial nature. These included images purporting to show Liv’s “children” playing on a beach and close-ups of poorly decorated Christmas cookies. Such posts raised further questions about the ethics of creating AI accounts that simulate deeply personal human experiences.
As criticism mounted, media outlets began dissecting the broader implications of Meta’s AI experiments. By Friday, the company had started removing posts associated with Liv and similar AI accounts, some of which had been active for over a year. Meta attributed the deletions to a “bug” that reportedly interfered with users’ ability to block these accounts.
Meta spokesperson Liz Sweeney sought to clarify the situation in an email to CNN, emphasizing that the Financial Times interview was not an official product announcement but a discussion of the company’s long-term vision for AI integration. “There is confusion,” Sweeney stated. “The recent article was about our vision for AI characters existing on our platforms over time, not announcing any new product.”
Sweeney further explained that the AI accounts in question were part of an experimental phase and reiterated the company’s commitment to addressing the concerns raised. “We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue,” she added.
The episode underscores the challenges Meta faces as it ventures deeper into artificial intelligence. While the company seeks to use AI to enhance user experiences, the controversy surrounding its experimental accounts serves as a stark reminder of the ethical and practical dilemmas that accompany such innovations.