
Why media literacy?

Since our founding, the core policy focus in the UK has shifted away from empowering children and parents with the skills they need to stay safe online, and towards regulation of online platforms. While online safety regulation is absolutely essential, and an important area of progress in terms of creating a level playing field for children online, it will not eliminate all risks and harm. As things stand, too many children and parents lack the skills and knowledge they need to be resilient in the face of a fast-evolving online world.

With political attention centred on the long passage of the Online Safety Act in recent years, the media literacy agenda has been somewhat neglected in comparison, lacking the same scale of ambition. The stated aim of the Online Safety Act is to make the UK the safest place to be online, but this goal will not be achieved without a much more expansive media literacy offer. That is not to say there has been no work in this space – far from it. There has been much excellent work across the sector, guided by the 2021 media literacy strategies from both the Department for Science, Innovation and Technology (DSIT) and Ofcom. Yet much of this work is piecemeal, time-bound and under-resourced. The result is that media literacy among families remains stubbornly poor.

5 big ideas for change

Every child should leave school with the skills they need to stay safe, be a critical thinker and behave responsibly online. Here are our five big ideas to transform media literacy through the classroom.

Raise the status of media literacy

Train teachers for the digital age

Embrace a whole-family approach

Set standards and generate insights

Build a cross-sector coalition

Learn about these ideas below.

How children learn about online challenges

“Children are naturally inquisitive and will explore,” says Lisa. “Unfortunately, they can easily fall down a ‘rabbit hole’ of continuous negativity and harmful algorithms.” Algorithms are a key part of social media platforms. They work by learning from user behaviour to suggest similar content. If your child ‘likes’, comments on, shares or even views certain content, the algorithm ‘learns’ what they enjoy. The problem is that these algorithms do not currently assess whether the content they suggest could cause harm.

Hollie adds that the whole concept of suggested content is “quite creepy.” While the algorithm can help users explore content they enjoy, it does so by tracking what they do.

If your child watches a video that features a dangerous challenge all the way through, even by accident, they are likely to see similar content. The same is true if other users with similar interests watch that content: algorithms also suggest content based on other users’ behaviour.
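To make this concrete, here is a minimal, purely hypothetical Python sketch of engagement-based recommendation (real platform algorithms are far more complex and proprietary; every name here is invented). It shows how a single full view of a ‘challenge’ video shifts what gets suggested first, and how nothing in the score asks whether the content is harmful:

    from collections import defaultdict

    # Hypothetical sketch of engagement-based recommendation; real
    # platform systems are far more complex and proprietary.

    def update_interests(interests, video_tags, weight=1.0):
        """Raise the user's affinity for each tag of a video they liked,
        shared, commented on or watched in full."""
        for tag in video_tags:
            interests[tag] += weight

    def rank_videos(interests, candidates):
        """Rank candidate videos by tag affinity alone; the score never
        checks whether a tag could be harmful."""
        return sorted(
            candidates,
            key=lambda v: sum(interests[t] for t in v["tags"]),
            reverse=True,
        )

    interests = defaultdict(float)
    # One full watch of a challenge video, even an accidental one...
    update_interests(interests, ["challenge", "stunt"])

    candidates = [
        {"title": "Cute cats", "tags": ["pets"]},
        {"title": "Extreme challenge", "tags": ["challenge", "stunt"]},
    ]
    # ...pushes similar content to the top of the suggestions.
    print(rank_videos(interests, candidates)[0]["title"])  # Extreme challenge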

What are the signs to look out for?

Hollie highlights the importance of investigating what your child says, especially if it relates to something that could lead to harm.

If your child is taking part in dangerous challenges, or if they’re seeing content that could influence them to do so, you might also notice other changes. It’s important to pay close attention to any behavioural changes or new interests. If they become more insular, says Hollie, and spend a lot more time in their room, you need to stay on top of what they’re doing.

“Check in on them,” says Hollie, “Talk to them.” Listen to what’s going on in their headset and ask them to show you what they’re watching. Talk about risky behaviours such as clicking on pop-up ads in the games they play or watching videos that promote harmful content.

If you do find signs that they’re watching or participating in these challenges, it’s also important to remain calm and avoid accusations. “In no way should we make them feel they have done something wrong,” says Lisa.

4 tips to prevent harm from online challenges

Have conversations early on

Lisa highlights the importance of keeping children safe while respecting their privacy. 'As parents we naturally want to protect our children . . . but the reality is we cannot police our children 24/7. We have to respect their privacy.' Talking about online safety early on is key to helping children keep themselves safe. Talk to them about risky behaviours, what to avoid and how to get support if something goes wrong.

LEARN ABOUT DIGITAL RESILIENCE

Encourage them to be mindful of uncomfortable content

'It's so important to ensure your child can talk to a parent or a responsible adult if they see something on social media that makes them feel uncomfortable,' advises Lisa. So, encourage your child to think about how content makes them feel, and explain what they can do (such as coming to you) if it makes them uneasy.

Understand the power of the internet

'The internet is a fantastic tool,' says Lisa, 'and yet extremely dangerous if its power is used to manipulate our young generation.' Hollie adds, 'Just because your home is your safe place for your children, don't assume they are safe online.' As parents, it's important to recognise that there are both benefits and risks so you can take action to help children experience more benefits.

Review their privacy and security

'Internet access is like inviting a billion strangers into your home,' says Hollie. She explains that it's unlikely you'd want your child interacting offline with everyone they might come across on social media. So, review the controls you have on their devices, apps and mobile or broadband networks to limit unwanted contact or content.

SEE PARENTAL CONTROLS GUIDES

Key findings from the research

The research began with a review of existing literature and messaging. Following this, we held panel discussions with 11-17-year-olds to gather their perspectives on the right preventative messaging.

Round 1

Round 1 panels considered the effectiveness of existing prevention messages. Here’s what we found:

Current barriers to effective education

Many children said there were barriers in how they were taught about sexual image-sharing in school. They said they received either no specific education about sexual image-sharing or only superficial coverage.

They also said whole-class, mixed gender lessons on the topic meant they struggled to contribute and share. Additionally, they said they currently learned about it too late.

Views on current messaging

All children agreed that messaging around healthy relationships and withstanding pressure and negative attention was effective. However, there were some key distinctions by gender.

Girls felt that preventative messaging should focus more on harmful behaviour from perpetrators.

Boys preferred messaging around the moral and legal consequences of pressuring and sharing nude images.

Round 2

In Round 2, panels explored how to reach children with effective prevention messaging.

Discussing and interrogating issues

Children and young people said they wanted the opportunity to take part in discussions and interventions that focused on issues around sexual image-sharing. They said they wanted these sessions led by someone with confidence and expertise in the subject.

Types of delivery

Panels were asked to suggest which delivery routes for educational messaging would be most effective. They were most supportive of discussion-based learning, learning through games and interventions on social media.

Round 3

Finally, in Round 3, panels tested prototype interventions that combined refined gender-specific prevention messages and deployment methods. Overall, panels felt it important to use various interventions to reinforce the messaging in different contexts (e.g. in the classroom and on devices).

Single-sex RSHE lesson

Children appreciated the single-sex RSHE lesson. They liked learning in smaller groups that were split by gender. This allowed them more interactivity and discussion, which they enjoyed along with tailored messaging.

Interactive game

Children were very positive about the game, especially its interactivity and scenario-based format. They liked seeing the impacts of their decisions in different situations and having the ability to make decisions based on different perspectives. Lastly, they liked working independently and without judgement.

On-platform nudges

Children were positive about the nudge technique and recognised the benefits of having a barrier before sending an image. Most thought it would cause people to re-think their decision and highlight the seriousness of sending a nude image.
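To illustrate what such a barrier can look like (a purely hypothetical Python sketch, not the prototype the panels tested; all names are invented), a send-flow nudge can be as simple as a confirmation step that interrupts the action and asks the user to actively re-confirm:

    # Hypothetical sketch of an on-platform nudge: a friction step placed
    # before an image is sent. Not the prototype tested in this research.

    def send_image_with_nudge(image_bytes, confirm):
        """Pause the send and ask the user to reconsider. `confirm` shows
        the nudge message and returns True only if the user actively
        chooses to continue."""
        message = (
            "You lose control of an image once it is sent, and sharing "
            "nude images of under-18s is illegal. Send anyway?"
        )
        if confirm(message):
            return "sent"       # user re-confirmed past the barrier
        return "cancelled"      # the nudge prompted a re-think

    # Example: a console-based confirmation step.
    result = send_image_with_nudge(
        image_bytes=b"...",
        confirm=lambda msg: input(msg + " (yes/no) ") == "yes",
    )
    print(result)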

The scale of self-generated CSAM

The IWF has reported an exponential increase in the volume of self-generated CSAM in recent years. In 2023, of the total 275,652 webpages actioned, 92% were assessed as containing ‘self-generated’ imagery. This is a 27% increase in ‘self-generated’ reports from 2022, and a 561% increase from 2019.

Read the full report

This research supports the need for a wide range of tools and approaches when teaching children about sexual image-sharing. It highlights the need for tailored interventions and prevention messages by gender.

Explore the full study in the report below to see our guidance on improving prevention education in schools and on platforms.

What is Kick streaming?

Kick.com is a streaming platform that shares similarities with Twitch and YouTube Live. Just like other live streaming services, users can broadcast content to others in real-time. Users can also interact with creators through chat and other features.

An investigation into Kick by VoiceBox uncovered a range of issues. These included sexual harassment, violent language, hate speech and gambling endorsements, among other things.

What is the minimum age requirement for Kick?

According to Kick’s Terms of Service, users must be at least 13 years old to use Kick (16 in the EU). Anyone under 18 must also have parental permission to use the platform.

There are no separate age requirements for streaming. However, the Community Guidelines state that streamed content cannot feature minors who are not in “the immediate presence of a parent or legal guardian.” As such, under-18s do not have permission to stream themselves without an adult present.

The reporting process

If you or your child sees content that goes against the Community Guidelines or Terms of Service, you should report it. You can report content and channels as well as users in chat.

Creators on Kick can report users. However, they have little control over their live streams. This is because Kick encourages an environment where users can “freely express themselves while respecting others.”

VoiceBox notes that Kick did not add the reporting function until a year after launch. The Community Guidelines also suggest a soft approach to enforcing rules.

How many people use Kick.com?

Since its launch in 2022, Kick.com’s usage has increased quickly. From January to April 2023, for example, viewership numbers increased by 404%.

One reason for this increase in traffic might relate to the amount streamers can earn. While Twitch lets creators keep 50% of their subscription revenue, and YouTube lets creators keep 70%, Kick lets creators keep 95% of their earnings. As such, some creators moved over to Kick from other platforms, and their viewers followed.

Kick.com also believes in avoiding “knee-jerk reactions often associated with ‘cancel culture’.” While they affirm that this doesn’t mean creators can use “free speech” as a “shield for hate speech”, this less strict approach might appeal to some.

Additionally, VoiceBox’s investigation into Kick’s streaming service found that some creators use openly hateful language without consequence. Some streamers also share adult-only content, they said, without labelling the content as such.

VoiceBox concludes that Kick is a natural result of popular platforms becoming more strict with their rules. Those who wish to share controversial material that will get them banned on Twitch or YouTube might see Kick as a preferred alternative.

Risks on Kick.com to watch out for

While Kick.com only has a minimum age of 13 in the United Kingdom, some of its content is not appropriate for under-18s. Additionally, the platform is still fairly new, which means it might lack robust safety features. In fact, its on-platform reporting function was only added a year after launch.

If your child uses or wants to use Kick’s streaming service, you should know about the following risks.

Online gambling

The Kick streaming platform has close links with a popular gambling site. However, the gambling site does not own Kick. Instead, the owner of Kick also founded the gambling site. Some suggest that Kick is just another way to direct users towards the gambling site.

On the Kick.com website, gambling is a clear focus. For instance, the number one category under ‘Top Live Categories’ is ‘Slots & Casino’. In this section, creators must mark all content as for over-18s.

Under this category, creators live stream themselves engaging in gambling. While users will get a notification to say the content is for over-18s, there are no real restrictions. In fact, you can already see the chat before clicking ‘Start watching’.

Moreover, popular creators often show large wins, which might influence minors to try their hand at these same games.

Inappropriate content

VoiceBox said they found streams of creators video-chatting with girls in their underwear. Other streams included sexual gestures, movements and noises. Some live streams also feature sexually suggestive material without an 18+ warning. Other streams feature random chats on platforms similar to the now-closed Omegle.

This content is often:

  • easily accessible;
  • not always properly labelled; and
  • available to all users.

As such, children might stumble across adult content. More popular platforms have stricter rules, which reduces the likelihood of children coming across such content there. However, the risk is never zero, so regular conversations and parental controls are vital.

Hate speech and hateful behaviours

Kick’s Community Guidelines warn against violence and hate speech on the platform. However, this is not always enforced. In fact, the same guidelines highlight that “context is critical in determining the occurrence of hate-speech.” As such, the platform might allow certain hate speech in the right context, though it’s unclear what that is.

VoiceBox’s investigation highlighted misogynistic and anti-LGBTQ+ language among popular creators and their followers. There were also cases of racist language as well as references to rape and suicide.

Learn more about hate in online spaces.

While this kind of behaviour is not unique to Kick.com, it does seem more widespread there than on some similar platforms.

That being said, Kick recognises that it is a newer platform, so it might introduce changes to counteract dangerous content down the line.


Stay on top of apps and platforms

Help keep your family safe on the platforms they use with personalised advice to manage risks and harm.

GET YOUR DIGITAL TOOLKIT

How to protect children and teens

If your child is interested in using Kick (or has already used it), there are things you can do to keep them safe.

Have an open conversation

Talk to them about their reasons for wanting to use Kick. Are they interested in streaming or viewing? What appeals to them about Kick that Twitch or YouTube Live doesn't offer? For those wishing to stream content, another platform might be a better option until they turn 18.

Update parental controls

If you don't want your child accessing Kick, you might need to update your parental controls. This is especially true if you've blocked specific sites. You can add Kick.com to your blocked list on broadband and mobile as well as parental control apps and web browsers. However, it's important to talk about doing this with your child as well.

Set clear rules and boundaries

If you do let your child use Kick, set clear guidelines. Remember that the Terms of Service say under-18s cannot feature in streams without a parent or legal guardian. If your child is watching streams, you might want to limit use of the platform to common areas such as the kitchen. You might also want to watch streams together instead.

Suggest alternatives

If your child wishes to stream content, consider letting them create videos that aren't live streamed. This will help them get familiar with streaming tools without the risks of broadcasting live. Additionally, consider more established platforms such as Twitch or YouTube for streaming instead.


What is 'undress AI'?

Undress AI describes a type of tool that uses artificial intelligence to remove clothing from individuals in images.

While each app or website might work differently, all of them offer this same basic service. Although the manipulated image doesn’t actually show the victim’s real nude body, it can imply that it does.

Perpetrators who use undress AI tools might keep the images for themselves or share them more widely. They could use these images for sexual coercion (sextortion), bullying/abuse or as a form of revenge porn.

Children and young people face additional harm if someone ‘undresses’ them using this technology. A report from the Internet Watch Foundation found over 11,000 potentially criminal AI-generated images of children on one dark web forum dedicated to child sexual abuse material (CSAM). They assessed around 3,000 images as criminal.

The IWF said that it also found “many examples of AI-generated images featuring known victims and famous children.” Generative AI can only create convincing images if it learns from accurate source material. Essentially, AI tools that generate CSAM would need to learn from real images featuring child abuse.

Risks to look out for

Undress AI tools use suggestive language to draw users in, and children may well follow their curiosity in response to it.

Children and young people might not yet understand the law. As such, they might struggle to separate harmful tools from those which promote harmless fun.

Inappropriate content and behaviour

The curiosity and novelty of an undress AI tool could expose children to inappropriate content. Because it’s not showing a ‘real’ nude image, they might then think it’s okay to use these tools. If they then share the image with their friends ‘for a laugh’, they are likely breaking the law without knowing it.

Without intervention from a parent or carer, they might continue the behaviour, even if it hurts others.

Privacy and security risks

Many legitimate generative AI tools require payment or subscription to create images. So, if a deepnude website is free, it might produce low-quality images or have lax security. If a child uploads a clothed image of themselves or a friend, the site or app might misuse it. This includes the ‘deepnude’ it creates.

Children using these tools are unlikely to read the Terms of Service or Privacy Policy, so they face risk they might not understand.

Creation of child sexual abuse material (CSAM)

The IWF also reported that cases of ‘self-generated’ CSAM circulating online increased by 417% from 2019 to 2022. Note that the term ‘self-generated’ is imperfect as, in most cases, abusers coerce children into creating these images.

However, with the use of undress AI, children might unknowingly create AI-generated CSAM. If they upload a clothed picture of themselves or another child, someone could ‘nudify’ that image and share it more widely.

Cyberbullying, abuse and harassment

Just like other types of deepfakes, people can use undress AI tools or ‘deepnudes’ to bully others. This could include claiming a peer sent a nude image of themselves when they didn’t. Or, it might include using AI to create a nude with features that bullies then mock.

It’s important to remember that sharing nude images of peers is both illegal and abusive.

How widespread is 'deepnude' technology?

Research shows that usage of these types of AI tools is increasing, especially to remove clothes from female victims.

One undress AI site says that its technology was “not intended for use with male subjects.” This is because they trained the tool using female imagery, which is true of most of these types of AI tools. Of the AI-generated CSAM that the Internet Watch Foundation investigated, 99.6% featured female children.

Research from Graphika highlighted a 2,000% increase in referral link spam for undress AI services in 2023. The report also found that 34 of these providers received over 24 million unique visitors to their websites in one month. Graphika predicts “further instances of online harm,” including sextortion and CSAM.

Perpetrators will likely continue to target girls and women over boys and men, especially if these tools mainly learn from female images.

What does UK law say?

Until recently, those creating sexually explicit deepfake images were not breaking the law unless the images were of children.

However, the Ministry of Justice announced a new law this week that will change this. Under the new law, those creating sexually explicit deepfake images of adults without their consent will face prosecution. Those convicted will also face an “unlimited fine.”

This contradicts a statement made in early 2024, which said that creating a deepfake intimate image was “not sufficiently harmful or culpable that it should constitute a criminal offence.”

As recently as last year, perpetrators could create and share these images (of adults) without breaking the law. However, the Online Safety Act made it illegal to share AI-generated intimate images without consent in January 2024.

Generally, this law should cover any image that’s sexual in nature. This includes those that feature nude or partially nude subjects.

One thing to note, however, is that this law relies on the intention to cause harm: a person who creates a sexually explicit deepfake must do so to humiliate or otherwise harm the victim. The problem is that intent is fairly difficult to prove, so it might be hard to actually prosecute parties creating sexually explicit deepfakes.

How to keep children safe from undress AI

Whether you're concerned about your child using undress AI tools or becoming a victim, here are some actions to take to protect them.

Have the first conversation

Over one-quarter of UK children report seeing pornography by age 11. One in ten say they first saw porn at age 9. Curiosity might also lead children to seek out undress AI tools. So, before they get to that point, it's important to talk about appropriate content, positive relationships and healthy behaviours.

SEE PORN CONVERSATION GUIDES

Set website and app limits

Block websites and set content restrictions across broadband and mobile networks as well as devices and apps. This will reduce the chance of your child stumbling upon inappropriate content as they explore online. Any attempts to access these websites can then become part of a broader conversation with you.

CHOOSE A GUIDE

Build children's digital resilience

Digital resilience is a skill that children must build. It means they can identify potential online harm and take action if needed. They know how to report, block and think critically about content they come across. This includes knowing when they need to get help from their parent, carer or other trusted adult.

SEE DIGITAL RESILIENCE TOOLKIT

Resources for further support

Learn more about artificial intelligence and find resources to support victims of undress AI.


What leads children to share nudes?

The making, sending, sharing and storing of child-generated sexual images – or nudes – is one of the key child safety and wellbeing issues currently facing schools.

All children with a smartphone or webcam-enabled laptop are at risk of involvement in this behaviour. However, my interactions with professionals and children suggest that there are a number of factors that may determine the nature of the child’s involvement and experience. These could include:

  • their sex;
  • social capital; and
  • vulnerabilities such as additional learning needs (ALN), adverse childhood experiences (ACEs) or poor mental health.

This is reflected in government guidance published in February 2024 and issued to schools in England.

The literature on which sex is more likely to send nudes is varied, but my conversations with children suggest different perceptions towards sexting between boys and girls.

Many boys perceive sexting as a low-risk and low-consequence behaviour. However, many boys also use generic sexual images that they find online. As such, they are less likely to be identified or suffer social consequences should someone share the image further.

Boys often send images as a way of gaining a girl’s trust: to make her feel that she has leverage should her own image spread further. During discussions, however, children are generally able to recognise that, whilst sexting is risky for both boys and girls, girls are far more likely to suffer significant social consequences such as shaming, humiliation and ostracisation.

What school policies are in place?

Exact policies and protocols regarding the response of school leaders to incidents of making, sharing, sending or storing nudes will vary by local authority or academy trust.

For students under the age of 18, most schools must notify parents and carers. They must also report the case to the local safeguarding body. In some cases, school police liaison officers will also advise the school.

Training for relevant school staff will vary by region. Still, most professional training prioritises themes such as:

  • the legal implications of child-generated sexual images;
  • student wellbeing; and
  • assessment of the incident for factors related to safeguarding such as the relationship between the involved parties, coercion and age-disparity.

How could these policies improve?

Many policies, resources and interventions attempt to prevent the making, sharing, sending and storing of sexual images by children by focusing their messaging on not making and not sending images.

To do so, however, is to misunderstand the power dynamic that often exists behind these activities.

Sexting sits alongside a number of other behaviours, such as sextortion and ‘revenge porn’, that exist within a cultural context that often normalises and glamorises sexual exploitation.

Messages about the legal and safety implications of child-generated sexual images are extremely important. However, teachers and other professionals who work with children and young people should also include discourse about not requesting nudes.

Furthermore, they should create opportunities to explore the underpinning themes of exploitation and pornography, healthy relationships, and respect and care for others.

Resources for parents and carers

Learn about sexual image-sharing, exploitation and more to help start important conversations.


How to support neurodivergent young people

To help parents and carers support neurodivergent children as they game, we’ve created a series of videos and guides. Designed to help young people spot risks and take action online, the series promotes online safety so that neurodivergent young people can enjoy the benefits of games.

What’s in this report?

This research into supporting neurodivergent children and young people used an online survey and focus groups. A total of 480 surveys were completed; 56% of parents who responded had at least one child on the autistic spectrum, while 48% had at least one child with ADHD.

In the report, you can explore:

  • neurodivergent young people’s online behaviours and experience of video games;
  • feelings/attitudes from parents and young people towards online games;
  • challenges associated with playing online games for neurodivergent teens;
  • neurodivergent young people’s ability and confidence to stay safe online;
  • how to help these children enjoy benefits of playing games safely;
  • the relationship between neurodivergent young people and Roblox.

Download the research summary

Learn more about the research aims and methodology, or explore more of the key findings.

Download summary

Key findings

The infographic presents key statistics (figures shown in the downloadable graphic): the proportion of neurodivergent young people who play video games offline or online; who say that gaming makes them happy; who make their own content online; and who play Roblox.

Download infographic

See the key findings from this research to help support neurodivergent young people who game.

Download infographic

Read the full research

Where the research comes from

In January 2024, we released our 3rd annual report, Children’s Wellbeing in a Digital World. Through this research, we found that both girls and parents tend to normalise girls receiving inappropriate comments, messages and images from males. One parent said this practice is “so standard, it’s not noteworthy.”

The research highlighted how teen girls in particular use the digital space for its many benefits while also receiving unwanted comments and attention from boys and men.

This follow-up research dives further into these experiences to identify what can be done to help girls enjoy time online without facing harassment.

Key findings

Hateful comments

Girls discuss receiving and seeing hateful comments based on appearance. These comments come from boys and men rather than from other girls or women.

The pursuit of the 'perfect selfie'

1 in 5 girls say that time online makes them feel worried about their body shape or size. Boys also feel these pressures around the 'ideal male body', often related to intense gym sessions and supplement use.

Inappropriate contact from men and other strangers

Girls we spoke to recalled receiving 'weird' or 'creepy' messages and 'dick pics' on social media and messaging apps.

Content that fosters sadness

When it comes to social media, girls mention seeing content that makes them feel sad, after which the platform suggests more content that makes them feel the same way.

Online bullying

Some girls and parents report bullying across a range of platforms. One girl noted how it caused her anxiety and led to changes in her offline behaviour.

Social pressure

Some girls say they feel compelled to stay active on social media even if they don't want to. Pressures include friends wanting them to 'like' or engage with content, or wanting them to post their own content.

Read the full report