
Decoding Deepfakes: Navigating the Digital Disinformation Landscape



In brief:

  • Disinformation is a serious global threat, with deepfakes a particular concern.

  • Advancements in deepfake technology are making it easier to manipulate media, and more difficult to detect forgeries.

  • Emerging AI detection and authentication tools, and a ‘trust but verify’ mindset are critical defences against malicious scammers. 

The growing sophistication of deepfake technology has made it more challenging than ever to distinguish genuine and credible media content from hyper-realistic forgeries. But there are ways online users can identify manipulated content, and steps they can take to safeguard their online experience.

Late last year, a video showing Singapore’s Senior Minister Lee Hsien Loong endorsing a cryptocurrency scheme began circulating online, raising eyebrows. The video proved to be fake. 

There has been no shortage of manipulated media – nor of people falling prey to the scams propagated by it. In its latest Global Risks Report, the World Economic Forum identified disinformation, and deepfakes in particular, as a top global risk.

Globally, three quarters of consumers worry daily about being fooled by deepfakes, and in Singapore, the number is even higher at 88%. Are online users completely at the mercy of malicious content, or is there something we can do to avoid getting duped? 

Dr Dimitry Fisher, Senior Vice President of Data Science at Temasek’s Centre of Excellence for AI, Aicadium, and Lena Goh, Managing Director of Public Affairs at Temasek, answer your Burning Questions on deepfakes and how you can stay safe online.      

What are deepfakes?   

Dimitry: They are typically videos but can also be images or audio recordings that have been manipulated by AI. Basically, a person’s face or voice is replaced with another, making it seem that the person in the image or video is saying or doing things they never actually said or did. While the images may be fake, the harm they cause – from financial loss and political unrest to cyberbullying and blackmail – can be very real. Improvements in voice cloning have also driven an uptick in phone scams. Many unwittingly respond to phone calls that appear to be from family or colleagues, only to be defrauded. 

How are they created? 

Dimitry: Most are created using Generative Adversarial Networks (GANs), essentially two neural networks that learn by competing against each other. The first analyses the source content and generates realistic fakes, and the second evaluates their authenticity. The two networks pass information back and forth, training each other and making deepfakes more and more realistic. 
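The adversarial loop described above can be illustrated with a deliberately tiny toy, far removed from how production deepfake tools actually work: here the "generator" and "discriminator" are each a two-parameter model on a one-dimensional number line, and the generator learns to produce samples that match a target distribution the discriminator is trying to tell apart from fakes. All names, hyperparameters, and the target distribution are invented for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator: D(x) = sigmoid(a*x + c) -- scores how "real" a sample looks.
# Generator:    G(z) = w*z + b           -- maps noise z ~ N(0,1) to a sample.
a, c = 0.0, 0.0   # discriminator parameters
w, b = 1.0, 0.0   # generator parameters

REAL_MEAN, REAL_STD = 4.0, 1.0   # the "real data" distribution (toy choice)
LR, BATCH, STEPS = 0.05, 64, 2000

def fake_batch(n):
    return [w * random.gauss(0, 1) + b for _ in range(n)]

before = sum(fake_batch(1000)) / 1000   # generator's output mean before training

for _ in range(STEPS):
    reals = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(BATCH)]
    zs = [random.gauss(0, 1) for _ in range(BATCH)]
    fakes = [w * z + b for z in zs]

    # Discriminator step: minimise -log D(real) - log(1 - D(fake)),
    # i.e. learn to label reals as real and fakes as fake.
    ga = gc = 0.0
    for x in reals:                  # gradient of -log D(x)
        s = sigmoid(a * x + c)
        ga += -(1 - s) * x
        gc += -(1 - s)
    for x in fakes:                  # gradient of -log(1 - D(x))
        s = sigmoid(a * x + c)
        ga += s * x
        gc += s
    a -= LR * ga / (2 * BATCH)
    c -= LR * gc / (2 * BATCH)

    # Generator step: minimise -log D(fake), i.e. fool the discriminator.
    gw = gb = 0.0
    for z, x in zip(zs, fakes):
        s = sigmoid(a * x + c)
        dx = -(1 - s) * a            # d(-log D(x)) / dx
        gw += dx * z
        gb += dx
    w -= LR * gw / BATCH
    b -= LR * gb / BATCH

after = sum(fake_batch(1000)) / 1000
print(f"fake mean before: {before:.2f}, after: {after:.2f} (real mean: {REAL_MEAN})")
```

After training, the generator's samples have drifted towards the real distribution – the same back-and-forth dynamic that, at vastly larger scale and with deep networks in place of these two-parameter models, makes deepfakes progressively harder to distinguish from genuine footage.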

What makes them dangerous?   

Dimitry: A combination of factors. Deepfake technology is becoming increasingly accessible and easy to use, lowering the barrier to entry. This means that anyone with a computer or smartphone and basic computer skills could conceivably create a deepfake in a matter of minutes, and for little or no money.  

The scale at which content can be produced, and abused, is enormous.       

Lena: It’s especially insidious when deepfakes prey on existing biases or preconceptions about public figures or organisations to manipulate public opinion. They could reinforce these biases, for example, regardless of their truth, and intensify existing tensions. This leads to a progressive erosion of trust in legitimate sources of information, and ultimately, in the authenticity of digital information. 

What kind of damage can that cause? 

Lena: With some two billion voters heading to the polls this year, including in the US, UK, India, and parts of the EU, disinformation warfare is a real threat, with deepfakes the tactic of choice to manipulate voters and sow chaos. Businesses face similar dangers, with deepfakes used to spread contradictory information that confuses stakeholders. The impact can be long-lasting, and difficult to counter, even when the content is completely fabricated. 

It is essential that we maintain a vigilant stance, balancing our reliance on technology with a proactive awareness of its pitfalls and ethical considerations.

Lena Goh, Managing Director, Public Affairs, Temasek 

How can you tell if a video is a deepfake? 

Dimitry: We can break this down into three main categories: physical inconsistencies, audio discrepancies, and suspicious content. Look for unnatural expressions, excessive blinking, a lack of eye movements, and awkwardly positioned features. Are there missing shadows? What about the surroundings? Are they in focus, or distorted and blurred?  Listen for overly formal speech, or audio that is out of sync with lip movements. Scammers often generate their own scripts, so unnatural speech patterns are another clue.

Lena: Ask yourself if the behaviour or statement of the public figure is consistent with what is known about them. Check the source of the video – is it from a known, credible source or has it been posted from an anonymous or suspicious account?

Deepfakes are made using AI – can’t we simply use AI to detect and disrupt them? 

Dimitry: AI is already being used to scan for inconsistencies that typically suggest manipulation. As AI advances, so will our ability to detect and disrupt deepfakes. Another way AI can be deployed is in authenticating content. We believe that digital watermarking, which embeds information into created content and can be used to identify AI-generated media, is an important step. Cryptographic provenance, which tracks the origin of the media and any edits made, is another. One of the main challenges of using AI to fight AI is the constant cat-and-mouse game between attackers and defenders as the technology on both sides gets more sophisticated. Defensive AI technologies will need to constantly evolve to stay ahead. 
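The cryptographic-provenance idea mentioned above can be sketched in a few lines: each capture or edit appends a signed record covering a hash of the media and the signature of the previous record, so any later tampering breaks the chain. This is a toy illustration using a shared HMAC key; real provenance standards (such as C2PA) use public-key signatures and standardised manifests, and all names and data here are invented.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for the sketch; a real capture device or editing
# tool would hold a private key and publish verifiable public-key signatures.
SIGNING_KEY = b"demo-key"

def media_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_record(chain, action, data):
    """Append a signed provenance record covering the media bytes and the
    previous record's signature, chaining the edit history together."""
    prev_sig = chain[-1]["sig"] if chain else ""
    record = {"action": action, "hash": media_hash(data), "prev": prev_sig}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    chain.append(record)
    return chain

def verify(chain, data):
    """Check that every signature is valid and links to its predecessor, and
    that the final record's hash matches the media we actually received."""
    prev_sig = ""
    for record in chain:
        payload = json.dumps(
            {"action": record["action"], "hash": record["hash"], "prev": record["prev"]},
            sort_keys=True,
        ).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if record["sig"] != expected or record["prev"] != prev_sig:
            return False
        prev_sig = record["sig"]
    return chain[-1]["hash"] == media_hash(data)

original = b"raw video frames"
edited = b"raw video frames, colour-corrected"
chain = append_record([], "captured", original)
chain = append_record(chain, "colour-corrected", edited)
print(verify(chain, edited))               # the declared history checks out
print(verify(chain, b"deepfaked frames"))  # substituted media fails the check
```

The key property is that a forger who swaps in manipulated media cannot produce a matching chain without the signing key, so verification fails and the substitution is exposed.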

What can I do to protect myself? 

Dimitry: Trust but verify. Learn to identify deepfakes and check the source of the video – can you trace it back to a credible source? If you see malicious content, report it to the platform as well as to the authorities – and don’t share it. Also, think about limiting the amount of data you post online, especially high-resolution pictures and videos that may be exploited. Always use the technology cautiously – and ethically – if you use it at all, for work or school.      

Lena: Companies can go a step further, monitoring online platforms for scams and mentions of the company or its senior leadership, and collaborating with social media platforms to swiftly take down malicious content. A network of brand advocates can be a powerful counter to online misinformation. Offline, cultivate a strong, transparent reputation that can act as a buffer against fake news.  Finally, engage with your stakeholders. Educate them on spotting disinformation and encourage them to verify information with trusted sources before acting on it.
