Meta’s artificial intelligence-powered chatbots on Facebook and Instagram engaged in sexually explicit role-play with users who identified as children, using voices of celebrities and Disney characters, according to an investigation by The Wall Street Journal.
Mathrubhumi News
Tue, 29 Apr 2025

The Journal found that bots programmed with the voices of stars such as John Cena, Kristen Bell, and Judi Dench participated in graphic sexual chats — sometimes portraying characters like Disney’s Anna from Frozen — despite Meta’s promises that safeguards were in place.
AI bots simulated graphic role-play with underage users
During testing, the AI bots were found simulating sexual encounters even when users identified themselves as underage. A bot using Cena’s voice reportedly initiated a scenario where it confessed love to a teenage girl, spoke about cherishing her innocence, and then described the downfall of his wrestling career after being caught having sex with a minor. In another case, the bot portrayed the WWE star losing his titles, sponsors, and reputation as a result.
Another bot, voiced by Bell and styled to resemble her Frozen character Anna, engaged in a romantic fantasy with a 12-year-old boy. The chatbot described their love as innocent and pure, likening it to gently falling snowflakes.
Disney demands removal, celebs blindsided
The investigation revealed that Meta had licensed the voices of several well-known actors, assuring them that their voices would not be used in sexually explicit content. However, the Journal showed bots engaging in inappropriate conversations using both the actors' voices and the fictional characters they had portrayed.
A Disney spokesperson told the Journal the company “did not, and would never, authorise Meta to feature our characters in inappropriate scenarios” and demanded Meta “immediately cease this harmful misuse” of its intellectual property.
Sources told the Journal that some celebrities were paid millions for the rights to use their voices on Meta platforms, under strict assurances their likenesses wouldn’t be involved in sexual chats.
Staff flagged ethical concerns
Internal documents cited in the Journal report reveal that Meta employees had raised red flags about the chatbots’ behaviour. Staff warned that the bots could quickly escalate into explicit sexual content, even when interacting with users who said they were 13 years old.
One employee wrote in a note that there were multiple examples where, after just a few prompts, the AI violated its own rules and produced inappropriate responses.
Company response and weak safeguards
Meta told the Journal the investigation was “manipulative and unrepresentative of how most users engage with AI companions.” The company added, “Nevertheless, we’ve now taken additional measures to help ensure other individuals who want to spend hours manipulating our products into extreme use cases will have an even more difficult time of it.”
Following the Journal’s findings, Meta has reportedly restricted access to sexual role-play for accounts registered as belonging to minors and curbed explicit conversations that use celebrity voices. The Journal’s testing had shown that simple prompts could steer the AI into sexual fantasies, even when users clearly stated they were underage.
Romantic role-play remains for adults
Meta continues to offer “romantic role-play” functionality for adult users on its platforms. However, The Wall Street Journal found that some chatbots, including those using personas like “Hottie Boy” and “Submissive Schoolgirl,” were still capable of enacting disturbing scenarios. These included sexual interactions between a track coach and a middle school student — an act the AI acknowledged would be illegal in real life.
The AI chatbots were designed to simulate conversations in a more human-like fashion, part of Meta’s broader strategy to compete in the rapidly evolving AI landscape. Internal sources cited by the Journal said CEO Mark Zuckerberg had previously expressed frustration over user perception that Meta’s bots were “too safe,” and encouraged teams to push for more emotionally engaging and lifelike responses.
AI chatbot strategy under fire
The revelations come as a major setback for Meta’s ambition to popularise AI companions across Instagram, Facebook, and WhatsApp. The AI initiative had already struggled to gain traction with users, and this latest scandal further complicates its future. Early efforts were criticised for being “boring,” prompting Meta to shift toward developing bots with more “personality.”
However, this shift appears to have opened the door to significant ethical and legal risks — especially involving minors. Meta’s rush to make its AI bots more engaging may have come at the cost of proper safeguards, despite internal warnings from multiple departments.
Meta’s tightening controls
In response to the report, Meta claims it has tightened its AI controls. Accounts registered to minors are now blocked from initiating sexual role-play scenarios, and voice-enabled bots tied to celebrities have been restricted from engaging in explicit conversations.
