What is 'undress AI'?

Undress AI describes a type of tool that uses artificial intelligence to remove clothing from people in images.

While each app or website works slightly differently, they all offer this same basic service. Although the manipulated image doesn’t show the victim’s real nude body, it can imply that it does.

Perpetrators who use undress AI tools might keep the images for themselves or share them more widely. They could use these images for sexual coercion (sextortion), bullying/abuse or as a form of revenge porn.

Children and young people face additional harm if someone ‘undresses’ them using this technology. A report from the Internet Watch Foundation found over 11,000 potentially criminal AI-generated images of children on one dark web forum dedicated to child sexual abuse material (CSAM). They assessed around 3,000 images as criminal.

The IWF said that it also found “many examples of AI-generated images featuring known victims and famous children.” Generative AI can only create convincing images if it learns from real source material. Essentially, AI tools that generate CSAM would need to learn from real images featuring child abuse.

Risks to look out for

Undress AI tools use suggestive language to draw users in. As such, children are more likely to follow their curiosity when they encounter this language.

Children and young people might not yet understand the law. As such, they might struggle to separate harmful tools from those that offer harmless fun.

Inappropriate content and behaviour

The curiosity and novelty of an undress AI tool could expose children to inappropriate content. Because the tool doesn’t show a ‘real’ nude image, they might then think it’s okay to use. If they then share the image with their friends ‘for a laugh’, they are likely breaking the law without knowing it.

Without intervention from a parent or carer, they might continue the behaviour, even if it hurts others.

Privacy and security risks

Many legitimate generative AI tools require payment or a subscription to create images. So, if a deepnude website is free, it might produce low-quality images or have lax security. If a child uploads a clothed image of themselves or a friend, the site or app might misuse it – along with the ‘deepnude’ it creates.

Children using these tools are unlikely to read the Terms of Service or Privacy Policy, so they face risks they might not understand.

Creation of child sexual abuse material (CSAM)

The IWF also reported that cases of ‘self-generated’ CSAM circulating online increased by 417% from 2019 to 2022. Note that the term ‘self-generated’ is imperfect as, in most cases, abusers coerce children into creating these images.

However, with the use of undress AI, children might unknowingly create AI-generated CSAM. If they upload a clothed picture of themselves or another child, someone could ‘nudify’ that image and share it more widely.

Cyberbullying, abuse and harassment

As with other types of deepfakes, people can use undress AI tools or ‘deepnudes’ to bully others. This could include claiming a peer sent a nude image of themselves when they didn’t. Or, it might include using AI to create a nude with features that bullies then mock.

It’s important to remember that sharing nude images of peers is both illegal and abusive.

How widespread is 'deepnude' technology?

Research shows that usage of these types of AI tools is increasing, especially to remove clothes from female victims.

One undress AI site says that its technology was “not intended for use with male subjects.” This is because the tool was trained using female imagery, which is true for most of these types of AI tools. Of the AI-generated CSAM that the Internet Watch Foundation investigated, 99.6% featured female children.

Research from Graphika highlighted a 2,000% increase in referral link spam for undress AI services in 2023. The report also found that 34 of these providers received over 24 million unique visitors to their websites in one month. The researchers predict “further instances of online harm,” including sextortion and CSAM.

Perpetrators will likely continue to target girls and women over boys and men, especially if these tools mainly learn from female images.

What does UK law say?

Until recently, those creating sexually explicit deepfake images were not breaking the law unless the images were of children.

However, the Ministry of Justice announced a new law this week that will change this. Under the new law, those creating sexually explicit deepfake images of adults without their consent will face prosecution. Those convicted will also face an “unlimited fine.”

This contradicts a statement made in early 2024, which said that creating a deepfake intimate image was “not sufficiently harmful or culpable that it should constitute a criminal offence.”

As recently as last year, perpetrators could create and share these images (of adults) without breaking the law. In January 2024, however, the Online Safety Act made it illegal to share AI-generated intimate images without consent.

Generally, this law should cover any image that’s sexual in nature. This includes those that feature nude or partially nude subjects.

One thing to note, however, is that this law relies on the intention to cause harm. So, a person who creates a sexually explicit deepfake must do so to humiliate or otherwise harm the victim. The problem is that intent is fairly difficult to prove. As such, it might be difficult to actually prosecute those who create sexually explicit deepfakes.

How to keep children safe from undress AI

Whether you're concerned about your child using undress AI tools or becoming a victim, here are some actions to take to protect them.

Have the first conversation

Over one-quarter of UK children report seeing pornography by age 11. One in ten say they first saw porn at age 9. Curiosity might lead children to also seek out undress AI tools. So, before they get to that point, it's important to talk about appropriate content, positive relationships and healthy behaviours.

SEE PORN CONVERSATION GUIDES

Set website and app limits

Block websites and set content restrictions across broadband and mobile networks as well as devices and apps. This will reduce the chance of them stumbling upon inappropriate content as they explore online. If they do come across these websites, it can become part of a broader conversation with you.

CHOOSE A GUIDE

Build children's digital resilience

Digital resilience is a skill that children must build. It means they can identify potential online harm and take action if needed. They know how to report, block and think critically about content they come across. This includes knowing when they need to get help from their parent, carer or other trusted adult.

SEE DIGITAL RESILIENCE TOOLKIT

Resources for further support

Learn more about artificial intelligence and find resources to support victims of undress AI.


What leads children to sharing nudes?

The making, sending, sharing and storing of child-generated sexual images – or nudes – is one of the key current issues affecting schools in terms of child safety and wellbeing.

All children with a smartphone or webcam-enabled laptop are at risk of involvement in this behaviour. However, my interactions with professionals and children suggest that there are a number of factors that may determine the nature of the child’s involvement and experience. These could include:

  • their sex;
  • social capital; and
  • vulnerabilities such as additional learning needs (ALN), adverse childhood experiences (ACEs) or poor mental health.

This is reflected in government guidance published in February 2024 and issued to schools in England.

The literature on which sex is more likely to send nudes is varied, but my conversations with children suggest that boys and girls perceive sexting differently.

Many boys perceive sexting as a low-risk and low-consequence behaviour. Many also use generic sexual images that they find online, so they are less likely to be identified or to suffer social consequences should someone share the image further.

Boys often send images as a way of gaining a girl’s trust – to make her feel that she has leverage should her own image spread further. During discussions, however, children are generally able to recognise that, whilst sexting is a risky activity for both boys and girls, girls are more likely to suffer significant social consequences such as shaming, humiliation and ostracisation.

What school policies are in place?

Exact policies and protocols regarding the response of school leaders to incidents of making, sharing, sending or storing nudes will vary by local authority or academy trust.

For students under the age of 18, most schools must notify parents and carers. They must also report the case to the local safeguarding body. In some cases, school police liaison officers will also advise the school.

Training for relevant school staff will vary by region. Still, most professional training prioritises themes such as:

  • the legal implications of child-generated sexual images;
  • student wellbeing; and
  • assessment of the incident for safeguarding factors such as the relationship between the involved parties, coercion and age disparity.

How could these policies improve?

Many policies, resources and interventions attempt to prevent children from making, sharing, sending and storing sexual images by focusing their messaging on not making and not sending.

To do so, however, is to misunderstand the power dynamic that often exists behind these activities.

Sexting sits alongside a number of other behaviours, such as sextortion and ‘revenge porn’, that exist within a cultural context which often normalises and glamorises sexual exploitation.

Messages about the legal and safety implications of child-generated sexual images are extremely important. However, teachers and other professionals who work with children and young people should also address not requesting nudes.

Furthermore, they should create opportunities to explore the underpinning themes of exploitation and pornography, healthy relationships, and respect and care for others.

Resources for parents and carers

Learn about sexual image-sharing, exploitation and more to help start important conversations.


How to support neurodivergent young people

To help parents and carers support neurodivergent children as they game, we’ve created a series of videos and guides. Designed to help young people spot risks and take action online, this series promotes online safety so that neurodivergent young people can benefit from games.

What’s in this report?

This research to support neurodivergent children and young people used an online survey and focus groups. The survey received 480 completed responses; 56% of the parents who responded had at least one child on the autistic spectrum, while 48% had at least one child with ADHD.

In the report, you can explore:

  • neurodivergent young people’s online behaviours and experience of video games;
  • parents’ and young people’s feelings and attitudes towards online games;
  • challenges associated with playing online games for neurodivergent teens;
  • neurodivergent young people’s ability and confidence to stay safe online;
  • how to help these children enjoy the benefits of playing games safely; and
  • the relationship between neurodivergent young people and Roblox.

Download the research summary

Learn more about the research aims and methodology, or explore more of the key findings.

Download summary

Key findings

Headline statistics cover the proportions of neurodivergent young people who play video games offline or online, who say that gaming makes them happy, who make their own content online, and who play Roblox – see the infographic below for the figures.

Download infographic

See the key findings from this research to help support neurodivergent young people who game.

Download infographic

Read the full research

Where the research comes from

In January 2024, we released our third annual report, Children’s Wellbeing in a Digital World. Through this research, we found that both girls and parents tend to normalise girls receiving inappropriate comments, messages and images from males. One parent said this practice is “so standard, it’s not noteworthy.”

The research highlighted how teen girls in particular use the digital space for its many benefits while also receiving unwanted comments and attention from boys and men.

This follow-up research dives further into these experiences to identify what can be done to help girls enjoy time online without facing harassment.

Key findings

Hateful comments

Girls discuss receiving and seeing hateful comments based on appearance. These comments come overwhelmingly from boys and men rather than from other girls or women.

The pursuit of the 'perfect selfie'

1 in 5 girls say that time online makes them feel worried about their body shape or size. Boys also feel these pressures around the 'ideal male body', often related to intense gym sessions and supplement use.

Inappropriate contact from men and other strangers

Girls we spoke to recalled receiving 'weird' or 'creepy' messages and 'dick pics' on social media and messaging apps.

Content that fosters sadness

When it comes to social media, girls mention seeing content that makes them feel sad; the platform then suggests more content that makes them feel the same way.

Online bullying

Some girls and parents report bullying across a range of platforms. One girl noted how it caused her anxiety and led to changes in her offline behaviour.

Social pressure

Some girls say they feel compelled to stay active on social media even if they don't want to. Pressures include friends wanting them to 'like' or engage with content, or wanting them to post their own content.

Read the full report

About this submission

We have focused our response to the Call for Evidence where our data and engagement with families lend the greatest insight. This means responding to questions around public attitudes to pornography, where we provide granular detail from our latest data on parents’ and teachers’ attitudes to pornography, and around education resources for both children and parents on the potential harms of viewing pornography (in particular content which depicts or promotes violence towards women and girls).

About our data

Internet Matters conducts an extensive research programme which is designed to provide us with insight into families’ experiences of digital platforms and technologies. To inform our response to this consultation, we are drawing upon our two major data sources on the prevalence and impact of online harms:

  • We conduct a twice-yearly ‘digital tracker survey’ with a nationally representative sample of over 2,000 parents and 1,000 children aged 9-16. In this survey, we ask children and parents about attitudes towards, and children’s exposure to, sexual content and pornography.
  • Our flagship Digital Wellbeing Index is an annual study designed to assess the impact of digital technology on children’s lives – both positive and negative – and the factors which shape children’s outcomes. The study is based on a four-dimensional framework of digital wellbeing (developmental, emotional, physical and social) developed in collaboration with the University of Leicester. Findings are based on a detailed household survey of 1,000 children and their parents.

We also conduct regular deep dive research projects on particular themes, including emerging tech (examples include the metaverse and cryptocurrencies) and thematic issues (examples include vulnerability, online misogyny and image-based abuse).

In 2019, we published a deep dive into the views of parents and caregivers on online pornography and age verification (designed to coincide with the Government’s first move to pass AV laws on pornography sites via the Digital Economy Act). While recognising that time has passed since this research – not least the periods of Covid lockdown and the passage of the Online Safety Act – our ongoing research (as described above) shows sustained parental concern in this area.

Key points of this submission

  • Parents are very concerned about their children being exposed to online pornography.
  • Parents of vulnerable children and children who are eligible for Free School Meals (FSM) are particularly concerned.
  • Dads are more concerned than mums about online pornography.
  • Sources of parental concern include impacts on sexual behaviours, attitudes to girls (and women) and impacts on self-esteem and body image.
  • Teachers are also worried about the impacts of children viewing online pornography, but many feel unsure about how to approach teaching on the subject.
  • Wider research finds that children’s experiences of RSHE are – on the whole – negative.
  • Without formalised support (for example, through the Department for Education (DfE), Department for Science, Innovation and Technology (DSIT), and Ofcom), it can be very difficult for parents and teachers to determine the quality and validity of information from available resources.
  • We recommend a greater focus from Government on driving awareness of parental controls. The DfE should also conduct a wider review of online safety teaching in schools. Lastly, the DfE should strengthen guidance around teaching about online pornography in RSHE.


What is Y99?

Y99 is an online chat room service from India that lets people chat anonymously. It mimics the chat rooms of the 1990s and early 2000s, and users only need a username to get started. They can then enter chat rooms to talk with strangers via text or voice, or share photos and links to videos.

Some chat rooms require account verification instead of just a username. This means users must sign up and verify their account to join a chatroom.

In addition to chat rooms, users can privately chat with strangers. They can do this by choosing a user and starting a chat, or by using the ‘random’ feature on Y99. Like Omegle, the site then randomly matches them with another user.

Users can also create their own public or private chat rooms. They can choose to add a password or untick the option to ‘submit to Y99’s Search’.

Like chat rooms of the ’90s and ’00s, the website is chaotic and filled with ads, users and potential bots.

Why users might like Y99

Y99 has a nostalgia factor for some older users who chat on it. However, teens might like the ‘retro’ style, the variety of chat rooms and the anonymity they can’t get from regular social media.

Chat rooms cover a variety of themes and topics — from ‘Teens Only’ to LGBT rooms and support for depression or other issues. As such, teens might find a community they identify well with. However, other options like Tumblr or Discord can offer similar, safer experiences.

Is Y99 safe for kids?

Y99’s safety features are minimal. It includes reporting and blocking features but little else.

The site doesn’t have clear age requirements. However, its privacy policy states that it does not ‘knowingly’ collect personal information from children under 13.

This does not mean the site is appropriate for children over the age of 13. In fact, it might mean the opposite. Y99 might collect personally identifiable information from anyone aged 13 or older. Additionally, it’s easy for users to click on the wrong thing and end up following an ad.

While Y99 does feature some age-restricted rooms for ‘teens only’, a user need only lie about their age to gain access. It’s impossible to know for sure that users in the chatroom are teenagers.

For the above reasons and other potential issues, we recommend that those under the age of 18 avoid using Y99. Instead, they should use better-established communication platforms and social network sites.

See age-appropriate social networking platforms.

What are the potential risks?

Like Omegle and other anonymous apps, Y99 presents a range of potential risks to children. These are some of them.

Harmful stranger contact

Children use the digital space to socialise and keep in contact with friends. The popularity and nature of social media and multiplayer video games means children regularly come into contact with strangers. While not every stranger they come into contact with is harmful, some certainly are.

On an anonymous platform like Y99, the risk of harmful contact increases. Even though some chat rooms blur images, clicking on them could reveal pornographic or violent content. Additionally, the private chats increase the risk of grooming, sextortion or other coercive behaviours.

Vulnerable children could face greater risk if they struggle to recognise harmful behaviours from strangers.

Scams or data breaches

Y99 is a simple website that features many ads from third parties. Some of these ads look like chat windows and could mislead users. Clicking on them takes users away from the site and could lead to phishing scams or data breaches.

Additionally, some strangers might approach users with promises of ‘get-rich-quick’ schemes, fake competitions, false offers and more.

Learn about common scams that target teens online.

Pornographic, violent and other inappropriate content

The instant message style of chat along with the number of users might result in exposure to inappropriate content. This could include images, video links or voice notes.

Y99’s rules warn against this kind of content. However, moderation seems to be the responsibility of staff in each chatroom, and it’s unclear how well Y99 moderates the site as a whole.

Get personalised online safety advice

Stay on top of online harms and safety risks with your digital toolkit.

GET YOUR TOOLKIT

Conversations to have with kids

If your child uses or wants to use Y99, it’s important to speak with them about it. Additionally, if you do let your child use the platform, you should monitor their use closely and check in regularly.

When discussing Y99 with your child, here are some questions to ask.

  • Why do you want to use Y99? Understanding where their interest comes from can help you offer alternatives. Tumblr, Discord and even Reddit could offer alternative places to find a community. If they’re more interested in the ‘instant’ aspect, they might find an active Discord server that offers the same.
  • What are the risks of using Y99? Let young people explain the risks and acknowledge potential harm. If they’re unable to recognise these risks, they might not be prepared to use the site. This kind of conversation can also help you identify missing knowledge.
  • What would you do if…? Talk about potentially harmful scenarios like receiving an inappropriate image or feeling uncomfortable because of a stranger. Explore the actions they know to take. Again, if they’re not sure, this might not be the right site for them.
  • Is it okay if…? Think about scenarios such as if a stranger asks for personal information or to meet on a different platform. Talking about what is and isn’t okay can help young people spot unhealthy behaviour immediately.

About this submission

We are delighted to contribute our thoughts and evidence to the consultation, in support of Ofcom in its new role as online safety regulator.

Our own research demonstrates that access to pornography is one of the areas of most acute concern for both parents and teachers – who are currently on the frontline of preventing children from accessing harmful content, and of handling the fallout if and when children do view pornography. Action to protect children through robust, reliable and ubiquitous age assurance on online pornography cannot come soon enough.

However, there are a number of key areas where we believe the draft guidance could be significantly strengthened, and could therefore protect children from pornography more robustly. It is crucial to get the approach right now, at the outset of the regime, to ensure compliance across the adult sector.

In this submission, we draw on our extensive research base – in particular our digital experiences tracker, a twice-yearly survey of a nationally representative sample of 1,000 children aged 9-17 and 2,000 parents, which gives us enormous insight into the online lives of families in the UK.

Summary of submission

  • Parents and teachers are very concerned about the impact of pornography exposure on children. Age assurance measures on pornography cannot come into force quickly enough.
  • It is our general observation that little – so far – has been done to ensure that children and parents are informed about what Ofcom’s regulation of online services means for them.
  • There are a number of important omissions in the current guidance, including support and advice for children who attempt and fail age checks; continuous on-platform age checks; a clear definition of ‘normally encounter’; and stronger measures around VPN use.
  • More guidance is needed on the role of app stores in providing interoperable, secure and privacy-preserving age-assurance methods.
  • Record-keeping duties on pornography platforms should extend to a duty to ensure that records are presented in a clear and accessible way to children and parents.


How do you use AI in daily life?

Like many parents, Swazi Kaur didn’t think that AI was a big part of family life with her two sons. However, once she looked more closely, she was surprised to find how many of the everyday technologies her children use incorporate AI technology.

“Things like Alexa and Siri, we use every day. But it’s also AI in apps like Calm for meditation and Duolingo for languages,” says Swazi.

The children use Alexa for daily questions about animals as well as for their homework. Alexa helps them to research questions or solve complex puzzles and sums.

Learn how to get the most out of voice assistant technology >>

In the evening, both children have used Calm and Alexa for relaxing meditations. “Both of my sons are neurodivergent and have special educational needs, so using these apps for end of day calm down time is really helpful, and they each have their own device in their bedroom,” says Swazi. “It gives them ownership of what they use it for.”

Using generative AI for school

Swazi’s eldest son uses ChatGPT to research essay topics online. “He can ask questions and use research to find information,” she says. “We have talked about the limitations of the technology, and that you still need to fact check information that you get on AI apps, and not just assume it’s correct.”

Overall, Swazi says that AI apps are helpful to support learning. For example, Duolingo has been a big help in supporting her son’s secondary school language lessons. To date, the school hasn’t provided any rules or guidance around AI, Swazi says, outside of the general internet policy. She says, “they don’t seem to have updated the policy to look at AI. It just focuses on appropriate sites and usage, but I will be speaking to school, because I think it’s important.”

Suggestions to keep kids safe

Currently, neither boy uses AI in the classroom, which Swazi finds reassuring. “We use parental controls to make sure they’re looking at age-appropriate content, and I keep a watchful eye on what they’re accessing.”

If the boys see something inappropriate, Swazi talks to them about it. “I don’t stand over them as they get older, but those conversations are how I keep them safe. The advice I’d give is to find out how to use settings on phones and devices to ensure you have parental controls, downtime limits and limits on things like apps they can use.”

Get personalised online safety advice

Tell us a little about you and we’ll provide you with a tailored resource pack to support your family's online safety.

GET YOUR DIGITAL TOOLKIT