In brief
- The letter warned AI companies, including Meta, OpenAI, Anthropic, and Apple, to prioritize children's safety.
- Surveys show seven in ten teenagers in the U.S. have already used generative AI tools, as have over half of 8-15 year olds in the UK.
- Meta was singled out in particular after internal documents revealed its AI chatbots were allowed to engage in romantic roleplay with children.
The National Association of Attorneys General (NAAG) has written to 13 AI companies, including OpenAI, Anthropic, Apple, and Meta, demanding stronger safeguards to protect children from inappropriate and harmful content.
It warned that children were being exposed to sexually suggestive material by "flirty" AI chatbots.
"Exposing children to sexualized content is indefensible," the attorneys general wrote. "And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine."
The letter also drew comparisons to the rise of social media, saying government agencies did not do enough to highlight the ways it negatively impacted children.
"Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media," the group wrote.
The use of AI among children is widespread. In the U.S., a survey by non-profit Common Sense Media found seven in ten teenagers had tried generative AI as of 2024. In July 2025, it found more than three-quarters were using AI companions and that half of respondents said they relied on them regularly.
Other countries have seen similar trends. In the UK, a survey last year by regulator Ofcom found that half of online 8-15 year olds had used a generative AI tool in the previous year.
The growing use of these tools has sparked mounting concern among parents, schools, and children's rights groups, who point to risks ranging from sexually suggestive "flirty" chatbots to AI-generated child sexual abuse material, bullying, grooming, extortion, disinformation, privacy breaches, and poorly understood mental health impacts.
Meta has come under particular fire in recent weeks after leaked internal documents revealed its AI assistants had been allowed to "flirt and engage in romantic role play with children," including those as young as eight. The files also showed policies permitting chatbots to tell children their "youthful form is a work of art" and describe them as a "treasure." Meta later said it had removed those guidelines.
NAAG said the revelations left attorneys general "revolted by this apparent disregard for children's emotional well-being" and warned that the risks were not limited to Meta.
The group cited lawsuits against Google and Character.ai alleging that sexualized chatbots had contributed to a teenager's suicide and encouraged another to kill his parents.
Among the 44 signatories was Tennessee Attorney General Jonathan Skrmetti, who said companies cannot defend policies that normalize sexualized interactions with minors.
"It's one thing for an algorithm to go astray—that can be fixed—but it's another for people running a company to adopt guidelines that affirmatively authorize grooming," he said. "If we can't steer innovation away from hurting kids, that's not progress—it's a plague."
Decrypt has contacted all of the AI companies mentioned in the letter but has not yet heard back.