Deepfakes on social media platforms, and those using email addresses and simulated voices, are becoming increasingly common. For example, over $25 million was stolen from a large company in Hong Kong after fraudsters used deepfake simulations of the company's CFO and colleagues on a Zoom call to instruct an employee to transfer funds1. The incident illustrates how advancing technology and AI are aiding fraud attacks. Impersonation can cause a wide range of losses, many resulting in direct payment of funds to cybercriminals. In 2023, the Internet Crime Complaint Center (IC3) received 21,489 business email compromise (BEC) complaints, resulting in over $2.9 billion in adjusted losses2.
Companies should adopt methods to prevent fraudulent funds transfers initiated with information obtained through social engineering, information theft or deepfake technology.
Entities should avoid using standard email for wiring instructions or for mentioning wire payments, as email is susceptible to infiltration exploits. Two-step verification of any payment is critical when authenticity cannot be confirmed at first glance. For example, one phone call to the CFO at a known number would have prevented the $25 million loss in Hong Kong. Confirming instructions through an independent channel, with a known individual at a known phone number or contact point, is now essential for large payments.
Cyber policies can cover deepfake fraud, but only if they include a social engineering provision alongside the computer fraud provisions commonly found in cyber policies. Many cyber policies do not include loss-of-funds or social engineering extensions by default; these must be requested. Narrower computer fraud and computer theft coverages often require a breach of the computer system, but because most social engineering losses are carried out by an authorized system user, such coverage will not be broad enough. Social engineering coverage is more extensive: it is triggered by the misrepresentation or fraudulent act that misleads an employee authorized to carry out financial transactions, without restriction on how the payment is made.
A comprehensive crime policy can cover monetary loss to an insured caused by a social engineering event; however, there is significant variation between insurers on the breadth and depth of the coverage. For example, many insurers limit coverage to claims where there has been “callback verification.”
A careful review of the social engineering insuring agreement is recommended, including any callback verification requirements and applicable sublimits.
While sublimits are common, many crime insurers are willing to write excess social engineering coverage over primary sublimits at competitive rates. In addition, coordination with the cyber policy is imperative to determine which policy responds to a covered event as primary; crime is usually the better choice. Most crime insurers have not expanded the social engineering provisions in their policies to expressly include AI deepfakes, so terms and conditions should be reviewed carefully for broader language that contemplates AI deepfake exposures.
The pace of AI development will bring more sophisticated fraud in the coming years. Insurance markets can provide protection but will expect insureds to have verification processes and procedures in place. Entities should align their relevant insurance policies and ensure that policy language is sufficiently broad to cover the intended events. Brown & Brown can help in this regard. Most importantly, companies should stay current with evolving fraud methods and keep staff trained on the most up-to-date controls.
1. "A $25 Million Hong Kong Deepfake Scam on Zoom Shows New AI Risks," Bloomberg.
2. Internet Crime Complaint Center, 2023 Internet Crime Report (2023_IC3Report.pdf).
Senior Managing Director