
Summary

Today (Sunday, July 13th) we’ve published a new report, ‘Me, myself & AI: Understanding and safeguarding children’s use of AI chatbots’.

As AI chatbots rapidly become part of children’s everyday lives, the report explores how children are interacting with them. While the report highlights how AI tools can offer benefits to children, such as learning support and a space to ask questions, it also warns that these tools pose risks to children’s safety and development. A lack of age verification and regulation means some children are being exposed to inappropriate content.

Our research raises concerns that children are using AI chatbots in emotionally driven ways, including for friendship and advice, even though many popular AI chatbots were not built for children to use in this way. The report warns that children may become overly reliant on AI chatbots or receive inaccurate or inappropriate responses, and as a result may be less likely to seek help from trusted adults.

These concerns have been heightened by incidents such as a case in Florida, where a mother filed a lawsuit against character.ai, claiming an AI chatbot based on a character from Game of Thrones engaged in abusive and sexual interactions with her teenage son and encouraged him to take his own life. In the UK, an MP recently told Parliament about “an extremely harrowing meeting” with a constituent whose 12-year-old son had allegedly been groomed by a chatbot on the same platform.

The report argues that the Government and the tech industry need to re-examine whether existing laws and regulation adequately protect children who are using AI chatbots. There is growing recognition that further clarity, updated guidance or new legislation may be needed. In particular, we are calling on the Government to place strong age-assurance requirements on providers of AI chatbots, ensuring they enforce minimum age requirements and create age-appropriate experiences for children.

To inform our research, we surveyed a representative sample of 1,000 children in the UK aged 9-17 and 2,000 parents of children aged 3-17, and held four focus groups with children. User testing was conducted on three AI chatbots – ChatGPT, Snapchat’s My AI and character.ai – and two ‘avatars’ were created to simulate a child’s experience on each.

Key findings from this research include: 

The report also makes system-wide recommendations to support and protect children using AI chatbots, including: 

Rachel Huggins, Co-CEO of Internet Matters, said: 

“AI chatbots are rapidly becoming a part of childhood, with their use growing dramatically over the past two years. Yet most children, parents and schools are flying blind, and don’t have the information or protective tools they need to manage this technological revolution in a safe way.   

“While there are clearly benefits to AI, our research reveals how chatbots are starting to reshape children’s views of ‘friendship’. We’ve arrived at a point very quickly where children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally driven and sensitive advice. Also concerning is that they are often unquestioning about what their new ‘friends’ are telling them.

“We must heed these early warning signs and take coordinated action to make sure children can explore the potential of AI chatbots safely and positively and avoid the obvious potential for harm.   

“Millions of children in the UK are using AI chatbots on platforms not designed for them, without adequate safeguards, education or oversight. Parents, carers and educators need support to guide children’s AI use. The tech industry must adopt a safety-by-design approach to the development of AI chatbots, while Government should ensure our online safety laws are robust enough to meet the challenges this new technology is bringing into children’s lives.”

Derek Ray-Hill, Interim CEO at the Internet Watch Foundation, said:  

“This report raises some fundamental questions about the regulation and oversight of these AI chatbots.

“That children may be encountering explicit or age-inappropriate content via AI chatbots increases the potential for harms in a space which, as our evidence suggests, is already proving challenging for young users. Reports that grooming may have occurred via this technology are particularly disturbing.

“Children deserve a safe internet where they can play, socialise, and learn without being exposed to harm. We need to see urgent action from Government and tech companies to build safety by design into AI chatbots before they are made available.”