What is a deepfake?

Deepfake technology is a type of artificial intelligence that can spread misinformation and disinformation. Stay informed to help protect children and young people online.


Guidance at a glance

Get quick insight and advice about deepfake technology.

What does it mean?

Deepfakes are videos or audio recordings that manipulate a person's likeness, making them appear to do or say something they never did.

Learn about deepfakes

What are the risks?

Deepfakes have the potential to spread false information or propaganda. They can also play a role in cyberbullying and scams.

Learn about harms

How to protect children

Helping children develop critical thinking by having regular conversations is a key part of keeping them safe from harm.

See safety tips

What is a deepfake?

The two main types of deepfakes are video and audio. Video deepfakes manipulate the appearance of a person while audio deepfakes manipulate their voice.

Video and audio deepfakes often work together to create misleading content. The people who create this fake content might have innocent intentions. For example, they might make a video of a celebrity doing a funny dance to make people laugh. However, even the best intentions can lead to misinformation and confusion.

What are video deepfakes?

Video deepfakes are the most common type. People who create these videos use artificial intelligence to replace someone’s face or body in existing video footage. Examples include making a celebrity say something controversial or a news anchor reporting fabricated stories.

In some video deepfakes, a voice actor imitates the celebrity or the original speaker to provide the new audio. Alternatively, the creator might use voice cloning.

What is voice cloning?

Voice cloning, also known as audio deepfakes, manipulates a person's voice rather than their image to make it sound like someone else. An increasingly common scam uses voice cloning to target parents: they might receive a frantic call from someone who sounds like their child, demanding money for some urgent purpose. These scams are extremely convincing and have resulted in financial loss.

Someone could also use an audio deepfake to bully another person. For example, they could imitate a classmate’s voice to make it seem like they said something they didn’t.

Regardless of the type of deepfake, it's important to remember that someone needs to use an AI tool to create it. It's not always clear who created a deepfake, but AI does not generate this content on its own.

What are the potential harms?

Deepfakes can affect people in a range of ways and can leave children and young people open to online harm.

Children and young people might struggle to recognise fake videos and audio, especially as the technology becomes more sophisticated. Additionally, the increasing accessibility of AI tools means that more people can create deepfakes, which widens the reach of potential harms.

False information and propaganda 

Online users can use deepfakes to:

  • spread false information;
  • ruin trust in public figures; or
  • manipulate conversations around politics and other important issues.

Children are still developing critical thinking skills, so they are particularly vulnerable to believing this kind of information.

Reputation damage

Some reports link deepfakes to revenge porn. In these cases, the perpetrator adds the victim into compromising content such as pornographic videos or imagery.

The perpetrator might then use these deepfakes to coerce victims. This could include demanding payment or real images to stop them from sharing the deepfakes more widely. Learn more about this practice with our guide to sextortion.

Cyberbullying, harassment and abuse

Perpetrators might also use deepfakes to bully others by creating videos meant to mock, intimidate or embarrass them.

The nature of deepfakes might make the bullying more severe for the victim. It might even border on abusive behaviour. Learn what child-on-child abuse might look like.

A 2023 Internet Watch Foundation (IWF) report warned of increasing AI-generated child sexual abuse material (CSAM). They identified over 20,000 of these images posted to one dark web CSAM forum over a one-month period. They judged more than half of these as “most likely to be criminal.”

While this number doesn’t include deepfakes, the IWF says “realistic full-motion video content will become commonplace.” They also note that short AI-generated CSAM videos already exist. “These are only going to get more realistic and more widespread.”

Financial loss and scams

Some audio deepfakes or voice cloning scams cause victims to lose money. Public figures have also had their likeness used to promote scam investments.

One example involved YouTuber MrBeast, who appeared to offer his followers new iPhones; however, it wasn't actually him. YouTube is popular among children and teens, so deepfakes that mimic their favourite creators can leave them open to these scams.

Examples of deepfake scam videos
Video transcript
Well, let's stay with technology because artificial intelligence is fueling a boom in cyber crime. The cost is expected to hit 8 trillion dollars this year, more than the economy of Japan according to one estimate by cyber security experts. The world's biggest YouTuber is among those who've had their video image manipulated by AI to promote a scam, and even BBC presenters are not immune.

Have a look at this: British residents no longer need to work, that's the announcement made by our guest today, Elon Musk. Who will unveil a new investment project while the connection is going on. I will tell you more about this project that opens new opportunities for British people to receive a return on investment. More than three billion dollars were invested in the new project, and it is already up and running at the moment.

Strange, isn't it? Looks like me, sounds like me, you may say. It's kind of hard, isn't it, to get your head around this? And so I spoke to Stephanie Hair, she's an independent technology expert, I should say.


"Many people find it so difficult to spot these things," Hair says. "And you're not the first and I don't think you're going to be the last, unfortunately, because there's nothing really to stop this from happening. There's no regulation really to either hold anybody to account. I'm not sure who you would get any joy from if you wanted to sue, for example. Now, we got onto Facebook and said you need to take this down, it is fake, and they have done that, Meta has done that since. However, there are plenty more deepfake videos out there pointing viewers to scams, and the worry is people are actually parting with their own money because they believe they're genuine. And this is the worry: how do people tell between what is real and what is not?

"Honestly, I'm not sure about trying to tell from the technical limitations, because we were able to see even with your video that there were certain things that were not quite right, which made it very clear that it wasn't legit. What you really want to be doing is asking yourself: if it sounds too good to be true, it probably is. If it seems a bit weird, it probably is. There is no such thing as a free lunch."

"There certainly isn't, and I've tweeted or shared an article about this written by the BBC that gives you top tips on how to spot deepfake videos."

How to protect children from deepfake harms

The best way to keep children safe online is by giving them the tools to spot harm and get support. Here's how you can help them navigate deepfakes.

Open communication is key

Start conversations about online safety early and often. Discuss the concept of deepfakes. Explain how people use AI to create them and the potential dangers they pose.

Develop children's media literacy

Develop your child's critical thinking skills by encouraging them to question what they see and hear online. Teach them to look for clues that a video might be fake. Examples include unnatural movements or inconsistencies in audio and video quality.

LEARN ABOUT FALSE INFORMATION

Set up boundaries and settings

On your home broadband or mobile network, set parental controls to limit exposure to inappropriate content. Do the same with apps your child uses. Then, work together to set boundaries of where and when to use devices.

GET FAMILY AGREEMENT TEMPLATE

Talk about digital citizenship

Help your child develop positive online behaviour. Discourage them from creating or sharing fake content featuring others, even if they only do so as a joke.

SEE TOP INTERNET MANNERS

Show them how to verify sources

Discuss with your child the importance of verifying information before sharing it. Encourage them to check for reliable sources and fact-checking websites before believing anything they see online.

Keep things private

Talk to your children about online privacy and explain the importance of controlling their online footprint. Remember that anyone might use public images of your child for any purpose. So, limit the images you share of your child and teach them to limit the photos they share of themselves.

Latest articles and guides on AI

Find more support with deepfakes and other types of AI content with these resources.
