What is generative AI (Gen-AI) and how can it impact children’s wellbeing?


We were delighted to participate in a #Take20Talk with the Anti-Bullying Alliance. The talk gave us the opportunity to discuss the impact of generative AI on children’s digital wellbeing.

In the session, we discussed children’s use of generative AI, the benefits and risks it presents, and the current policy landscape surrounding this evolving technology. This presentation comes ahead of our upcoming research on generative AI and education, which will be released early next year.

What is generative AI?

Generative AI (Gen-AI) is a form of artificial intelligence that produces original text, images and audio. Gen-AI models are trained on large datasets and craft new content by drawing on patterns learned during the training process.

Gen-AI uses Natural Language Processing (NLP), a branch of AI that “focuses on helping computers to understand, interpret and generate human language.” These technologies offer many exciting opportunities for children. However, as with all new technologies, the benefits come with potential risks to be aware of.
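To picture what “drawing on patterns learned during training” means in practice, here is a toy sketch in Python. Real Gen-AI models use large neural networks trained on vast datasets, but the basic idea is the same: learn which words tend to follow which, then sample new text from those patterns. The three-sentence “training corpus” below is invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy "training data" -- real Gen-AI models learn from vast datasets.
corpus = (
    "children can learn with ai . "
    "children can play with friends . "
    "ai can help children learn ."
).split()

# "Training": record which words follow which in the data.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# "Generation": start from a word and repeatedly sample a likely
# next word, producing new text that mimics the learned patterns.
def generate(start="children", max_words=8):
    words = [start]
    while len(words) < max_words and words[-1] in follows:
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

print(generate())  # e.g. "children can play with friends ."
```

Even this toy version shows why generative AI can produce different output each time it is asked: generation is probabilistic sampling over learned patterns, not retrieval of stored answers.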

What are the opportunities of Gen-AI?

Customised learning experiences

Generative AI can help teachers customise lesson plans and materials to better support the different learning needs of their students, making learning more engaging and inclusive for diverse classrooms.

In fact, teachers already use generative AI tools to provide tailored support for pupils with special educational needs and disabilities (SEND).

Support through helplines

Children can benefit from helplines that use generative AI. Social and mental health helplines built on this technology can provide highly responsive, personalised support for young people, offering immediate assistance and enhancing the overall effectiveness of human support.

For example, Kids Help Phone is an online mental health service that uses generative AI to offer 24/7 support to children across Canada. Using NLP, the service analyses and matches young people’s communication styles, then directs each child to the specific service channel they need, whether that’s support with emotional distress, bullying issues or other concerns.
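To make that routing idea concrete, here is a minimal sketch of message triage in Python. It stands in for the concept only: the channel names and keywords are invented, and it uses simple keyword matching where a real service like Kids Help Phone would use trained NLP models.

```python
# Invented channels and keywords, for illustration only.
CHANNELS = {
    "bullying support": {"bully", "bullied", "picking on"},
    "emotional distress": {"sad", "anxious", "scared", "alone"},
}

def route_message(message: str) -> str:
    """Route a young person's message to a support channel."""
    text = message.lower()
    for channel, keywords in CHANNELS.items():
        if any(keyword in text for keyword in keywords):
            return channel
    return "general support"  # fall back to a human counsellor

print(route_message("Someone keeps bullying me at school"))
# -> "bullying support"
```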

Helplines offer a safe and anonymous space for young people to express their feelings and concerns without fear of judgement. However, while helpline chatbots can provide quick responses, they lack the nuance and adaptability that human support offers, so they are unlikely to match the effectiveness of human interaction.

Emotional support chatbots

As well as supporting and directing young people over the phone, AI chatbots can serve as virtual companions, offering emotional support to children who struggle to make friends, cope with social challenges, feel lonely or find it hard to share their feelings with others.

With the right safeguards in place, chatbots can also support children who struggle with social anxiety. These tools offer a non-judgemental space to engage in conversation, and to develop and practise social interactions.

An example of this kind of chatbot is Harlie, a smartphone app that uses AI technology and NLP algorithms to converse with humans. Rather than just responding to questions, Harlie encourages dialogue by asking the user questions. Furthermore, with the user’s permission, Harlie can capture information on speech patterns to share with health and research teams and help provide targeted therapy.

What are the potential risks?

Impacts on critical thinking

Over-reliance on generative AI might harm children’s critical thinking skills by reducing opportunities for independent analysis and problem-solving. Moreover, using AI as the main source of knowledge might compromise children’s ability to question and evaluate information. See our guide to thinking critically online.

It’s important to help children incorporate AI tools into their learning without relying on them too heavily.

Parasocial relationships and bullying

There’s a concerning trend where young users copy bullying behaviours and direct them at an AI chatbot. On social media, some users encourage others to ‘bully’ chatbots through gaslighting and general mistreatment.

While these children are not bullying other people, there’s concern that such virtual interactions could normalise bullying behaviours.

Exposure to explicit content

The use of certain AI chatbots could expose children to explicit and inappropriate content. One example is Replika, a customisable AI chatbot that encourages users to share personal information; the more a user shares, the more it can personalise its responses.

Although the website claims to be for over-18s only, it does not require any age verification, so children face few barriers to using the site.

Replika encourages users to engage in explicit adult conversations. It further prompts them to pay a fee for the chatbot to share indecent pictures or facilitate a ‘romantic’ video call. Normalising the act of paying for explicit content could promote a culture where children feel it is acceptable to request, receive and send inappropriate images — both with the chatbot and among themselves.

Illegal content generation

Reports suggest that children increasingly use AI tools to generate indecent images of peers, facilitated by easily accessible ‘declothing’ apps. Indecent images of under-18s are illegal no matter the circumstances of their production. This includes child sexual abuse material (CSAM) produced with deepfake technology.

While the dynamics of deepfake production and distribution differ from other forms of image-based sexual abuse, the harm for victims is likely to be just as severe, if not more so.

What teenagers say about deepfake technology

To explore the impacts of deepfake technology in more detail, we held a series of focus groups earlier this year with teenagers aged 15-17. The groups covered the subject of online misogyny, including the gender dynamics that underpin child-on-child sexual harassment and abuse.

Participants discussed sexual abuse involving deepfake technology – an issue that has hit the headlines multiple times this year, both in the UK and internationally. Teenagers – female participants in particular – generally shared the view that being a victim of deepfake abuse could in fact be more harmful than conventional forms of non-consensual image-sharing.

Participants told us that the intensity of harm lies in the lack of agency and control a victim would feel over a deepfake, because they would have no knowledge of, and would not have consented to, its production:

“I think that the deepfake would be a lot worse maybe, because, with a nude, you’ve took it as well, so you know about it, whereas the deepfake, you won’t have any clue at all. There could literally be one out right now and no one could know” – Girl, aged 15-17, Internet Matters focus group.

Upcoming research into generative AI

The UK government has decided against introducing new legislation on AI. Instead, it will rely on existing legislative frameworks to regulate the production and use of new AI technologies.

It’s still unclear if this light-touch approach will sufficiently protect individuals — especially children — from the full range of risks posed by AI, including emerging harms from new services and technologies.

It’s important that young people’s and parents’ views and concerns are accounted for in policy-making around the uses of generative AI. This is particularly true for the technology’s use in education, where the possible applications may be the most impactful.

So, we’re excited to announce new research into the impacts of generative AI on education, based on the views of children and parents. The research will explore how families and schools are using Gen-AI, and children’s and parents’ hopes and concerns for the future.
