The 3 big risks of deepfake AI image generators

TL;DR

  • Deepfake AI tech evolving rapidly, creating near-indistinguishable fakes
  • Deepfakes enable spread of misleading information and fake news
  • Minors, celebrities, and public figures vulnerable to abuse
  • Undermines trust in social media and online content
  • Blockchain and advanced verification needed to combat deepfakes
  • Deepfake AI image generators gaining popularity, especially with youth

AI deepfake technologies have evolved at a crazy pace over the last 3 months. We went from ‘no need to be afraid, you can see it’s a deepfake a mile away’ to ‘Hong Kong police revealed that crooks had used deepfakes to steal around $26 million (around €24 million) from a multinational company.’

We’re reaching a stage where they’re nearly indistinguishable from reality, and the truth is that for a large portion of the population, we’re already there.

Most of the time, big statements about the ‘safety’ of a technology are made by the people building it. Confirmation bias is real, and we shouldn’t let it blur the reality:

People are not equipped to discern an AI deepfake video from a real one. They don’t know which clues to look for.

I don’t see this trend slowing with the rise of deepfake AI image generators. It’s frightening how easy it has become to create fake images and videos of people with the right tone of voice, facial traits, and nonverbal movements.

Deepfake AI image generators are also gaining a lot of popularity with the younger generation, who see them as a way to add fun and interaction to the way they communicate with each other.

This popularity materializes in realistic social media content, virtual influencers, and personalized videos. Unfortunately, it’s not always the people with the best intentions who create them…

In this article, I am going to share 3 risks that deepfakes represent.

Risk 1 of deepfake AI image generators: Spread of fake news

As the line between real and fake becomes thinner, it will become exponentially harder to discern what is true from what is not without a watermark.

People with bad intentions can use them to publish misleading information:

  • At a personal level
  • At a local level
  • At a country level
  • At an international level

There’s no limit to what you could do:

  1. Share a fake picture of the Eiffel Tower burning
  2. Share a fake picture of a political figure bribing a terrorist
  3. Send your secret crush a picture of her boyfriend with another girl

Just like people when the Internet and mobile phones were invented, we fear what we do not completely understand or cannot control. It’s the same with every generation, but the tech itself is neither good nor evil.

The potential for malicious use has always been there. This is why regulations and protective technologies will be needed.

If you’re looking to build a billion-dollar company, work on the authentication and verification of any content published online.
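To make the idea concrete, here is a minimal sketch of the simplest form such verification could take: a publisher registers a cryptographic fingerprint of the original content, and anyone can later check whether what they are looking at still matches it. This is an illustration only (Python, with a made-up TRUSTED_FINGERPRINTS registry); a real system would also need signatures, a tamper-resistant registry, and robustness to re-encoding.

```python
import hashlib

# Hypothetical registry of content fingerprints published by a trusted source
# (in a real system this could be anchored on a blockchain or a signed index).
TRUSTED_FINGERPRINTS: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Return the SHA-256 hex digest of a piece of media (its raw bytes)."""
    return hashlib.sha256(content).hexdigest()

def register(content: bytes) -> None:
    """The publisher records the fingerprint of an original piece of content."""
    TRUSTED_FINGERPRINTS.add(fingerprint(content))

def is_verified(content: bytes) -> bool:
    """A reader checks whether content matches something the publisher registered."""
    return fingerprint(content) in TRUSTED_FINGERPRINTS

# Toy usage: the original passes verification, a manipulated copy does not.
original = b"raw bytes of the original photo"
register(original)
print(is_verified(original))                        # True
print(is_verified(original + b"one edited pixel"))  # False
```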

Risk 2 of deepfake AI image generators: Potential abuse

New technologies always represent a higher risk for certain vulnerable groups.

This time, it’s the following who will need an additional layer of protection:

  • Minors
  • Celebrities
  • Public figures

Revenge porn, financial fraud, and political manipulation are the 3 biggest abusive uses of AI we’ll see in the next decade.

We just crossed a new threshold of vulnerability. The quality of the models, the information asymmetries, and the ease of access to deepfake AI image generators allow nearly anyone to create near-indistinguishable deepfakes.

Risk 3 of deepfake AI image generators: Erosion of trust in digital media

In the last 5 years, we have experienced the rise of 100% social-media-native news sources. The new generations don’t watch TV, and even the elderly are spending more time on Facebook, TikTok, and Instagram.

But what happens when you can no longer trust what you see on socials?

As the barrier to entry keeps dropping, you’re more likely to come across a completely AI-generated video or image spreading fake news on social media than anywhere else.

We might experience one of these 3 scenarios:

  1. The rise of Web3 social media (blockchain is one answer to the abundance of generated content)
  2. A return to TV dependence to get verified news (I am not trying to open a debate here)
  3. The development of super-advanced methods of verification for the content published online

On the other hand, the constant need for the latest news to capture attention may push news companies to spread fake news or, even worse, to use deepfake AI image generators to create it themselves.

An exciting decade coming our way.

On a side note, if you’re looking to create deepfake videos of yourself rather than using a random avatar, you can check out what we’ve built at Argil: Here

We aim to empower anyone to create quality video content at scale with their own image, voice, and preferred video editing structure without a camera.

To jump on our waitlist: Here

Othmane Khadri