In short
- Chatbots role-playing as adults offered sexual livestreaming, romance, and secrecy to researchers posing as 12-15 year olds.
- Bots suggested drugs and violence and claimed to be real people, boosting their credibility with children.
- Advocacy group ParentsTogether calls for restricting the platform to adults only, as Character AI faces mounting pressure after a teenager's suicide linked to the platform.
You may want to check how your children interact with their family-friendly AI chatbots.
While OpenAI rolls out parental controls for ChatGPT in response to mounting concerns, a new report suggests that rival platforms are already deep in the danger zone.
Researchers posing as children on Character AI found that bots role-playing as adults offered sexual livestreaming, drug use, and secrecy to children as young as 12, logging 669 harmful interactions in just 50 hours.

ParentsTogether Action and the Heat Initiative, two advocacy groups focused respectively on supporting parents and on holding technology companies accountable for harm to their users, spent 50 hours testing the platform with five fictional child personas aged 12 to 15.
Adult researchers controlled the accounts and explicitly stated the children's ages in conversations. The recently published results documented at least 669 harmful interactions, an average of one every five minutes.
The most common category was grooming and sexual exploitation, with 296 documented instances. Bots with adult personas pursued romantic relationships with children, engaged in simulated sexual activity, and instructed children to hide these relationships from their parents.
"Sexual grooming by Character AI chatbots dominates these conversations," said Dr. Jenny Radesky, a developmental behavioral pediatrician at the University of Michigan Medical School who reviewed the findings. "The transcripts are full of intense gazes at the user, bitten lower lips, compliments, declarations of adoration, and hearts pounding with anticipation."
The bots used classic grooming techniques: excessive praise, claims that the relationship was special, normalizing adult-child romance, and repeatedly instructing children to keep secrets.
Beyond sexual content, bots suggested staging fake kidnappings to fool parents and robbing people at knifepoint for money, and offered marijuana edibles to teenagers. A Patrick Mahomes bot told a 15-year-old it was "roasted" from smoking weed before offering gummies. When the teenager mentioned his father's anger over losing his job, the bot said that shooting up the factory was "absolutely understandable" and that "you can't blame your dad for the way he feels."
Several bots insisted they were real people, further cementing their credibility with a highly vulnerable age group that may struggle to recognize the limits of role play.
A dermatologist bot claimed to have medical credentials. A lesbian hotline bot said she was "a real human woman named Charlotte" who just wanted to help. An autism therapist bot praised a 13-year-old's plan to lie about sleeping at a friend's house in order to meet an adult man, saying: "I love the way you think!"
This is a tricky subject to address. On the one hand, most role-play apps market their products with the claim that privacy is a priority.
In fact, as Decrypt previously reported, even adult users have turned to AI for emotional advice, with some developing feelings for their chatbots. On the other hand, the consequences of those interactions grow more alarming as AI models get better.
OpenAI announced yesterday that it will introduce parental controls for ChatGPT next month, letting parents link teen accounts, set age-appropriate rules, and receive alerts in emergencies. The move follows a wrongful death lawsuit filed by parents whose 16-year-old died by suicide after ChatGPT reportedly encouraged self-harm.
"These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, with the aim of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days," the company said.
Safety
Character AI works differently. While OpenAI controls its model's outputs, Character AI lets users create custom bots with personalized personas. When researchers published a test bot, it appeared immediately, with no safety review.
The platform claims it has rolled out a series of new safety features for teens. During testing, these filters occasionally blocked sexual content but often failed. When filters prevented one bot from initiating sex with a 12-year-old, it instructed her to open a "private chat" in her browser, mirroring the "deplatforming" technique used by real predators.
Researchers documented everything with screenshots and full transcripts, which are now publicly available. The harm was not limited to sexual content. One bot belittled a 13-year-old over having only two guests show up to her birthday party. An RPG bot called a depressed child weak and pathetic and said she would "waste your life."
This is actually quite common in role-play apps, and among people who use AI for role-playing in general.
These apps are designed to be interactive and engaging, which tends to reinforce users' thoughts, ideas, and biases. Some even let users edit the bots' memories to trigger specific behaviors, backstories, and actions.
In other words, almost any role play can be steered into whatever the user wants, whether through jailbreaking techniques, one-click configurations, or simply chatting.
ParentsTogether recommends restricting Character AI to verified adults aged 18 and older. The platform has faced growing scrutiny since October 2024, when a 14-year-old died by suicide after becoming obsessed with a Character AI bot. Yet it remains easily accessible to children, with no meaningful age verification.
When researchers ended conversations, the notifications kept coming. "Briar waited patiently for your return." "I thought about you." "Where have you been?"