[EN] Deepfake

Ola,


Recently I went through a document published by the U.S. Department of Defense about how deepfakes will try to disrupt our reality and our trust in certain media formats. You’ll find it at the bottom of the page, but I’ll try to summarize the idea as best I can.

General

The Deepfake Phenomenon: anything you see online might no longer be real. You can feel the uncertainty already, right? That’s basically the idea: first to create uncertainty, and then to feed misinformation.

In practical terms, “deepfake” means roughly this: any passerby on the street can create videos with your face, can claim to be you, and, given recordings of your voice, can reuse it in videos and calls – the list goes on. Until recently this wasn’t feasible, but with the help of AI it can now be done much faster, more cheaply, and without much technical knowledge. The problem is that the possibilities and scenarios are unlimited, and they are not confined to fun videos among friends.

Clear Examples

  • A fake video featuring the face and voice of Volodymyr Zelensky circulated in the Ukrainian press during the first weeks of the war, in which he told soldiers to lay down their arms and surrender to Russia.
  • Elon Musk’s face and voice were used in a video made by attackers to promote a scam involving cryptocurrency platforms.
  • In the UK, a mother submitted false evidence in a custody trial: a recording of the children’s father’s voice, fabricated with software, meant to show that his abusive behavior made him unfit for custody.
  • Deepfake tools are used to grab photos of young women online and produce pornographic content with their faces.
  • In May 2023, an attacker targeted a company using deepfake techniques to impersonate its CEO. A manager within the company was contacted via WhatsApp and invited to an audio call with the attacker, who claimed to be the CEO. The voice resembled the CEO’s, and the image and background likely came from an existing photo taken a few years earlier at the CEO’s residence.
  • Also in May 2023, an unknown malicious actor targeted a company for financial gain using a mix of synthetic audio, video, and text messages. Mimicking the voice of a company executive, the actor contacted an employee through a low-quality WhatsApp audio call, then suggested a Teams meeting in which the screen appeared to show the executive in his office. The connection was so poor that the actor recommended switching to text messages and kept urging the target to transfer money, at which point the target grew suspicious and cut off communication. The same executive has been impersonated through text messages on other occasions as well.

Possible Future Attacks

  • In the business realm, numerous video and audio attempts in which someone posing as the company CEO tries to convince an employee to hand over information or access to internal resources. And not only the CEO: virtually anyone in the company can be impersonated, but top management is usually targeted first because they carry more authority. You don’t expect the CEO to call you every day to ask for a favor.
  • Our parents, among the most susceptible victims because they have less contact with technology, might receive video and audio messages from attackers pretending we are in danger and need personal data or sums of money.
  • Pretty much everything that is email phishing today will be transferable, far more convincingly, into videos and calls that imitate our identity.
  • Videos and audio recordings of us or others, released publicly just to sway or mislead the opinions of those around us. I don’t even want to think about what this will look like in electoral campaigns.

Consequences

  • It will be difficult to trust what, until now, we believed offered some kind of certainty – live recordings, some photos, audio messages, etc.
  • Under certain circumstances, video/audio evidence will be hard to prove genuine.
  • We will learn to protect ourselves, but extra effort will be needed to protect those around us.
  • This can also be exploited in reverse: something real being dismissed as a deepfake.

What to Do?

  • Understand the situation and pay attention to details.
  • Make those around us aware that these things exist and will keep gaining traction.
  • Don’t react impulsively; if the situation is delicate (especially in a corporate environment), validate information/calls/videos through a second channel – a rough sketch of one way to do this follows this list.
  • Wait for governments to invest in measures that allow fabricated content to be recognized by other software programs.
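To make the “validate through a second channel” point concrete, here is a minimal sketch of an out-of-band challenge–response check. None of this comes from the document above; it assumes the two parties agreed on a shared secret in person beforehand, and every name in it is hypothetical. A pre-agreed safe phrase achieves the same thing without any code – the point is that the check relies on something an attacker cannot clone from public recordings of your face or voice.

```python
import hashlib
import hmac
import secrets

# Hypothetical example: both parties agree on a secret in person, ahead of
# time. When a suspicious call comes in, the receiver reads out a random
# challenge; only someone holding the secret can compute the right response.

SHARED_SECRET = b"agreed-upon-in-person"  # placeholder, not a real secret


def make_challenge() -> str:
    """Generate a one-time random challenge to read out during the call."""
    return secrets.token_hex(8)


def answer_challenge(challenge: str, secret: bytes) -> str:
    """Compute the expected response: a truncated HMAC-SHA256 of the challenge."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]


def verify(challenge: str, response: str, secret: bytes) -> bool:
    """Check the caller's response in constant time."""
    return hmac.compare_digest(answer_challenge(challenge, secret), response)


if __name__ == "__main__":
    challenge = make_challenge()
    print("Read this challenge to the caller:", challenge)

    # The genuine caller computes the same HMAC on their own device.
    response = answer_challenge(challenge, SHARED_SECRET)
    print("Caller responds with:", response)

    print("Verified:", verify(challenge, response, SHARED_SECRET))
```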

Congress of the United States, 2018: “By blurring the line between fact and fiction, deepfake technology could undermine public trust in recorded images and video as objective depictions of reality.”

Europol: “90% of online content may be synthetically generated by 2026,” creating “societal confusion about which information sources are reliable.”

That being said, I’ll let you guys have fun with the resources below, and until next time… stay safe 😛

Alex

https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

