Little do you realise, you’ve been duped by a deepfake.

What are deepfakes?

The eSafety Commissioner of Australia defines deepfakes as digital photos, videos or sound files of real individuals that have been edited to create extremely realistic yet false depictions of them. Think of those viral videos or news stories where celebrities or politicians are made to say or do things they never actually did. Business leaders like Dick Smith, Twiggy Forrest and Gina Rinehart are among the high-profile Australians to be ensnared in deepfake scams where manipulated video footage is used to promote bogus investment schemes.

“Deepfakes used to be pretty rudimentary, but they’re now a lot better”, explains Wayne Williamson, Chief Information Security Officer at Equifax. What started as a joke has turned into a serious problem, with people using deepfakes to spread misinformation and enable fraud. And as the technology improves, telling the difference between real and fake is getting harder and harder.

Deepfake technology, also known as ‘artificial intelligence-generated synthetic media’, draws on sophisticated machine learning algorithms. With the advent of generative AI and user-friendly tools, anyone with a computer or smartphone can fabricate deepfake images or videos in minutes.

“You can find websites and apps where you can pick a famous person’s voice, type in whatever you want them to say, and boom, you’ve got a realistic-sounding recording”, says Mr Williamson.

“As the US election draws near, expect to see plenty of fake recordings of Joe Biden and Donald Trump popping up online. And once these audio deepfakes are shared on social media, they spread with alarming speed.”

The business implications of deepfakes

In the business world, deepfakes manifest as photos, voices and live videos impersonating key personnel such as executives, financial officers, supply chain partners, or senior IT and call centre staff.

Just a few months ago, fraudsters orchestrated a phoney multi-person video conference to deceive a finance worker in Hong Kong into transferring AU$39 million. The worker thought he was talking to the company’s CFO and other colleagues, but they were all deepfake versions. Since they looked and sounded just like his real coworkers, he went ahead with the transfer. It was only later, when he double-checked with the company’s head office, that the scam was exposed.

“The old saying ‘seeing is believing’ soon won’t be enough to protect people and businesses”, notes Mr Williamson.

By personalising messages and mimicking voices, bad actors can more easily pretend to be someone the target knows, gaining access to bank accounts or sensitive commercial data.

“With phishing attacks a favourite tactic among cybercriminals, we’ll likely see deepfakes used more to amplify their impact”, he warns.

Another danger zone is mergers and acquisitions. “Deepfake videos featuring CEOs or CFOs could spread false information or allegations, causing stock prices to swing or damaging reputations.”

How to defend against deepfakes

The fight against fraud has always been akin to a cat-and-mouse chase, and tackling deepfakes is no different. While high-quality fakes pose a significant challenge, cybersecurity solutions continuously refine their ability to scrutinise and identify potential irregularities.

To mitigate the risks of deepfakes, Mr Williamson advises business leaders and CISOs to consider the following:

  1. Improve awareness
    Organisations must instil in their workforce the importance of verifying all content, whether it arrives in their inbox, through personal channels or on social platforms. It is crucial to foster a culture of scrutiny in which employees understand the need to question anything that seems off or even slightly suspicious.
    Even when an individual looks or sounds entirely authentic, it’s essential to cultivate a mindset that allows for the possibility that the person is not who they say they are. When in doubt, consider the context of the interaction - is this how you would expect this person to communicate with you?
  2. Recognise telltale signs
    While advanced deepfakes may elude human detection, less sophisticated attempts often leave traces that give them away. Keep an eye out for anomalies around the cheeks, forehead, eyes and eyebrows. Look for inconsistencies in skin texture or tone, unusual blurring, and shadows that are the wrong intensity or not positioned where they would fall naturally.
    For individuals wearing glasses, discrepancies in the angle or intensity of glare may be apparent. Also, be wary of unnatural eye movements or irregular blinking patterns - a rough sketch of how blink analysis can be automated appears after this list. In cases of lip-synced deepfakes, mismatches between the audio and the speaker’s mouth movements may be noticeable.
    Conduct periodic phishing and deepfake simulations to instil a sense of vigilance among employees and encourage reporting of potential threats.
  3. Embrace a multi-dimensional approach
    Fighting deepfakes isn’t just up to security or IT - it’s a team effort. Cybersecurity preparedness and spotting deepfake trickery are collective responsibilities, necessitating collaboration between individuals, businesses, government bodies and the cybersecurity sector.
    Staying ahead of threat actors requires an always-on mindset and a multi-dimensional approach combining robust verification processes, advanced analytics, and ongoing education and training.

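To make the blinking cue above concrete, the sketch below shows one way an analyst might roughly automate it. It is a minimal, illustrative example only, assuming the OpenCV and MediaPipe Python libraries are installed; the file name, landmark indices and thresholds are assumptions chosen for demonstration, not a production deepfake detector.

```python
# Illustrative sketch only: flag clips with unusually low blink rates,
# one of the telltale signs described above.
# Assumed dependencies: pip install opencv-python mediapipe
import math

import cv2
import mediapipe as mp

# Commonly used MediaPipe Face Mesh landmark indices around each eye
# (corner, upper lid, upper lid, corner, lower lid, lower lid) - assumed here.
LEFT_EYE = [362, 385, 387, 263, 373, 380]
RIGHT_EYE = [33, 160, 158, 133, 153, 144]


def eye_aspect_ratio(landmarks, idx):
    """Eyelid opening relative to eye width; the ratio drops sharply during a blink."""
    p = [landmarks[i] for i in idx]

    def dist(a, b):
        return math.dist((a.x, a.y), (b.x, b.y))

    vertical = dist(p[1], p[5]) + dist(p[2], p[4])
    horizontal = dist(p[0], p[3])
    return vertical / (2.0 * horizontal + 1e-6)


def estimate_blink_rate(video_path, ear_threshold=0.2):
    """Return (blink count, blinks per minute) for the first face found in a video."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 fps if metadata is missing
    blinks, eyes_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            continue
        lm = results.multi_face_landmarks[0].landmark
        ear = (eye_aspect_ratio(lm, LEFT_EYE) + eye_aspect_ratio(lm, RIGHT_EYE)) / 2
        if ear < ear_threshold and not eyes_closed:  # eye just closed: count a blink
            blinks, eyes_closed = blinks + 1, True
        elif ear >= ear_threshold:
            eyes_closed = False
    cap.release()
    minutes = frames / (fps * 60)
    return blinks, (blinks / minutes if minutes else 0.0)


if __name__ == "__main__":
    # "suspect_clip.mp4" is a placeholder file name.
    count, per_minute = estimate_blink_rate("suspect_clip.mp4")
    # People typically blink around 15-20 times per minute; a far lower rate is not
    # proof of a deepfake, but it is a reason to look closer and verify the source.
    print(f"Blinks detected: {count} (~{per_minute:.1f} per minute)")
```

A heuristic like this is fragile on its own - newer deepfake models reproduce natural blinking convincingly - so treat any single signal as a prompt for human verification rather than a verdict.
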
Equifax is constantly raising the bar to out-smart, out-work, and out-innovate cyber criminals. Contact us to learn more about how our differentiated data, innovative analytics and advanced technology help prevent cybercrime.
