Deepfake Fraud Threatens CFOs: Protecting Corporate Finance

Multifactor verification and other precautions become essential as AI enables more sophisticated scams.

Glitches in video and telephone calls are generally attributed to poor service or other external causes. But if you notice unusual white hairs around the edge of your CFO’s beard just before the picture freezes, and when the call resumes a few seconds later the beard is jet black again, should you follow his instructions to transfer funds?

Perhaps, but not without further verification. Fraudsters, aided by AI applications, may one day – soon, even – be able to deepfake audio and video calls seamlessly. But for now, “tells” may indicate that something is wrong, and that temporary freeze could actually be the AI’s doing.

“I recently tested a platform that had a feature designed to help mask artifacts, glitches, or synchronization problems,” recalls Perry Carpenter, chief human risk management strategist at KnowBe4, a security awareness and behavior-change platform. “The program would freeze the video on the last good deepfake frame to protect the identity of the person running the fake. It’s clear that some attackers use adaptive strategies to minimize detection when their deepfakes begin to fail.”


“There should almost never be an immediate need to wire a large sum of money without first checking [it].”

Perry Carpenter, chief human risk management strategist, KnowBe4


To what extent these attacks succeed, or are even attempted, is unclear, because companies generally keep such information under wraps. One major attack, reported last year by CNN and others, involved a Hong Kong-based finance employee of British engineering company Arup, who was wary of an email requesting a secret $25 million payment. He nevertheless sent the money after a video call with several people who looked like colleagues – but were, in fact, deepfakes.

In another incident, reported by The Guardian last year, scammers used a publicly available photo of Mark Read, CEO of advertising giant WPP, to set up a fake WhatsApp account. That account was used in turn to arrange a Microsoft Teams meeting that employed a voice clone of one executive, with Read impersonated via a chat window, to target a third executive with requests for money and personal details.

A WPP spokesperson confirmed the accuracy of the Guardian account but declined to explain how the scam was thwarted, noting only that it was not something the company was keen to revisit.

Self-Correcting Deepfakes

Unlike deepfake video clips, which are extremely difficult to detect, real-time voice and video over social messaging platforms are still prone to errors, Carpenter explains. While earlier deepfakes had obvious tells, such as facial warping, unnatural blinking, or inconsistent lighting, newer models are beginning to self-correct these irregularities in real time.

Consequently, Carpenter does not train clients to spot technical flaws, which are often fleeting, as this can create a false sense of security. “Instead, we need to focus on behavioral clues, contextual inconsistencies, and other tells, such as the use of heightened emotion to try to prompt an answer or a reaction,” he says.

The rapid evolution of deepfakes poses a particularly significant risk to corporate finance departments, given their control over the object of fraudsters’ desire. Distributing a new code word to verify identities, perhaps daily or even per transaction, is one approach, explains Stuart Madnick, professor of information technology at the MIT Sloan School of Management. There are various ways to do this securely.
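By way of illustration only, here is a minimal sketch of how such rotating code words might be derived, assuming both parties hold a shared secret distributed out of band: an HMAC over the current date (or a transaction reference) yields a speakable word that never has to travel with the call. The word list, secret, and transaction ID below are hypothetical, not a prescribed scheme.

```python
import hashlib
import hmac
import datetime

# Illustrative word list; a real deployment would use a much larger one.
WORDS = ["falcon", "granite", "harbor", "juniper", "lantern",
         "meadow", "nimbus", "orchid", "pelican", "quarry"]

def code_word(shared_secret: bytes, context: str, length: int = 2) -> str:
    """Derive a human-speakable code word from a shared secret and a
    context string (today's date, or a transaction ID for per-transaction
    words). Both parties compute it locally; nothing is transmitted."""
    digest = hmac.new(shared_secret, context.encode(), hashlib.sha256).digest()
    picks = [WORDS[digest[i] % len(WORDS)] for i in range(length)]
    return "-".join(picks)

secret = b"distributed-out-of-band-to-finance-staff"  # hypothetical secret

# Daily code word: both callers derive the same word for today.
today = datetime.date.today().isoformat()
print("Today's code word:", code_word(secret, today))

# Per-transaction code word, keyed to a (hypothetical) payment reference.
print("Code for TXN-042:", code_word(secret, "TXN-042"))
```

Because each side derives the word locally, only the out-of-band shared secret needs protecting; a fraudster who merely clones a voice never learns it.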

When corporate finance managers who handle major funds know each other well, they can test their voice or video counterparts by asking semi-personal questions. Madnick has asked supposed colleagues what their “brother Ben” thinks of an issue – when there is no such brother.

A clever but not permanent solution, Madnick warns: “The problem is that the AI will learn all your brothers and sisters.” Ultimately, all companies should use multifactor authentication (MFA), which strengthens security by requiring verification from multiple sources; most large companies have largely implemented it. But even then, some critical departments may not use MFA consistently for certain tasks, notes Katie Boswell, US securing AI leader at KPMG, leaving them vulnerable.

“It is important that corporate leadership work with their IT and technology teams to ensure that effective cybersecurity solutions, such as MFA, are in the hands of those most likely to be exposed to deepfake attacks,” she urges.
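For illustration, a minimal sketch of one common MFA factor – a time-based one-time password (TOTP) – using the open-source pyotp library; the enrollment flow around it is an assumption, not a description of any particular company’s setup.

```python
import pyotp

# Enrollment (hypothetical flow): generate a per-user secret and share it
# out of band, e.g. via an authenticator-app QR code, never over the call.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def verify_second_factor(user_code: str) -> bool:
    """Accept the code only within its short validity window, so a cloned
    voice alone - without the enrolled device - is not enough."""
    return totp.verify(user_code)

print(verify_second_factor(totp.now()))  # True: code from the enrolled device
print(verify_second_factor("000000"))    # False, barring a chance collision
```

The point of the second factor is that it binds approval to something the deepfake cannot imitate: possession of the enrolled device.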

Perry Carpenter, chief human risk management strategist, KnowBe4

Identifying Multifaceted Scams

Even with MFA, crafty fraudsters can mine social media and other online resources, use AI to mock up authentic-looking invoices and other documents, and deploy deepfaked video and/or audio to build stories persuasive enough to convince executives to make decisions they later regret. That makes training critical: conditioning handlers of large sums of money to pause automatically when they receive unusual requests and to require additional verification.

“There should almost never be an immediate need to wire a large sum of money without first checking through a known internal channel,” Carpenter explains. A counterpart communicating from a private phone or account is also problematic, especially if he or she resists moving the conversation to the company’s secure systems. Stratagems such as adopting a tone of urgency, authority, or heightened emotion are also red flags, “so it is essential that people be given permission to pause and verify,” he says.

While two or more checks help, companies must also ensure that the verification channels themselves are secure. Madnick recalls a client company losing money when a fraudster passed a forged check. As expected, the bank called the company’s corporate finance department to verify the transaction, but the fraudster had already arranged with the telephone company to forward the call to a number where he validated the check himself.

“Companies can set up procedures with their telephone company requiring that calls never be forwarded without further verification with the company,” Madnick explains. “Otherwise, it’s at the telephone company’s discretion.”

Given corporate finance’s appeal to fraudsters, KPMG’s Boswell highlights the importance of keeping up to date with emerging threats. Since CFOs and other senior finance leaders must focus on their immediate duties, they cannot be expected to read the latest research on deepfake attacks. But companies can establish policies and procedures that ensure IT, or other experts, update them regularly, keeping finance staff alert to the latest types of attacks, both within their own company and at others.

Madnick regularly asks companies’ finance leaders to raise their hands if they know whether their departments have faced cyberattacks. Many do not.

Katie Boswell, US securing AI leader, KPMG

“The problem is that cyberattacks go on for more than 200 days, on average, before being discovered,” he says. “So they may think they haven’t suffered an attack, when they’re simply not aware of it yet.”

Corporate finance can also include deepfake scenarios in its risk assessments, including tabletop exercises incorporated into company security initiatives. And employees should be encouraged to report unsuccessful attacks, or what they believe may have been attacks, which they might otherwise dismiss, Boswell advises.

“That way, other members of the organization are aware that it has potentially been targeted, and know what to look for,” she says.

In addition, while C-suite executives of large companies may have substantial public profiles, externally available information on lower-level managers and on departments such as accounts payable and receivable should be limited. “Threat actors are increasingly using this type of information, with the help of AI, to manipulate targets through social engineering,” Boswell notes. “If they don’t have access to this data, they can’t incorporate it into their attacks.”

Such precautions will only become more important as deepfake fraudsters widen and deepen their reach. Although attacks have spread fastest in major economies such as the United States and Europe, even countries whose populations speak less common languages are increasingly exposed.

“Most criminals may not know Turkish, but the great thing about AI systems is that they can speak just about any language,” Madnick warns. “If I were a criminal, I would target companies in countries that have been targeted less in the past, because they are probably less prepared.”
