Wikipedia describes ‘deepfakes’ as media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks. They often combine and superimpose existing media into source media using machine learning techniques known as autoencoders and generative adversarial networks.
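The autoencoder approach mentioned above can be sketched at toy scale: a shared encoder maps faces of two identities into one latent space, and each identity gets its own decoder; "swapping" means encoding a face of identity A and decoding it with B's decoder. The sketch below is a hypothetical linear stand-in on random data, purely to show the data flow; real deepfake tools train deep convolutional networks on thousands of real images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale images for two identities (random data).
faces_a = rng.random((20, 64))
faces_b = rng.random((20, 64))

# Shared encoder: a single random linear projection into a 16-dim latent space.
W_enc = rng.random((64, 16))

# One decoder per identity, fit by least squares so that
# decode(encode(x)) approximates x for that identity's faces.
Z_a = faces_a @ W_enc
Z_b = faces_b @ W_enc
W_dec_a, *_ = np.linalg.lstsq(Z_a, faces_a, rcond=None)
W_dec_b, *_ = np.linalg.lstsq(Z_b, faces_b, rcond=None)

# The "swap": encode a face of identity A, decode with B's decoder,
# producing an image in A's pose but rendered as identity B.
swapped = (faces_a[0] @ W_enc) @ W_dec_b
print(swapped.shape)
```

The key design point is the shared encoder: because both identities pass through the same latent representation of pose and expression, a decoder trained on one identity can re-render the other's expressions.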
In plain terms, deepfakes are created with technology that is easy to access and often used for unethical purposes. Essentially, a deepfake can be made by anyone with a computer, internet access, an interest in influencing an outcome or doing something illegal, and little concern for the associated ethical and legal ramifications.
The idea of identity morphing started as a novelty driven by smartphone apps that let people change their appearance for fun, but in recent times, with deepfakes, it has taken a much more sinister turn. It's not just images and videos that are being manipulated; voice deepfakes have already been used to commit fraud.
According to the Wall Street Journal, there may soon be serious financial and legal ramifications from the proliferation of deepfake technology. The publication cited a recent case in which criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000).
The CEO of a UK-based energy firm thought he was speaking on the phone with his boss, the chief executive of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, directing the executive to pay within an hour, according to the company’s insurance firm.
The report also noted that several officials believe the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI. It is unclear, however, whether it is truly the first such attack, or whether other incidents have gone unreported or the use of the technology went undetected by authorities.
However, just as with deepfake images and videos, companies are working on services and apps that imitate voices, for reasons that remain unclear.
The Verge reports on "Google's controversial Duplex service that uses AI to mimic the voice of a real human being so that it can make phone calls on a user's behalf," adding that "a number of smaller startups, many of which are located in China, are offering up similar services for free on smartphones, sometimes under questionable privacy and data collection terms."
If you are a telco, a financial institution or any business that uses facial or voice recognition to verify the identity of a customer or client, then alarm bells must be sounding, and very loudly. But help may be at hand.
Researchers at tech companies and in academia are reportedly working on technology to detect deepfakes, but short-term results and complete detection don't look promising. Other researchers are demonstrating just how convincing a deepfake can be, showing that even a single photo and audio file can be turned into a talking or singing video portrait.
Despite news that some of these researchers are developing tools that can detect deepfakes with greater than 90% accuracy, everyone in the security industry knows that the remaining 10% that goes undetected is the part fraudsters will target and ultimately benefit from. In the meantime, constant vigilance and monitoring with tools that detect abnormal patterns may still be our best hope, at least until AI detection tools are readily available and much more effective.
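The kind of abnormal-pattern monitoring mentioned above can be as simple as flagging requests that deviate sharply from an account's historical behaviour. The sketch below is a minimal, hypothetical illustration using a z-score on past transfer amounts (the figures are invented); production fraud-detection systems combine many more signals.

```python
import statistics

# Hypothetical history of approved transfer amounts for one account (EUR).
history = [8_500, 12_000, 9_750, 11_200, 10_400, 9_900, 12_300]

def is_abnormal(amount, past_amounts, z_threshold=3.0):
    """Flag a request whose amount deviates sharply from past behaviour.

    Computes the z-score of the new amount against the sample mean and
    standard deviation of past approved amounts.
    """
    mean = statistics.mean(past_amounts)
    stdev = statistics.stdev(past_amounts)
    z = abs(amount - mean) / stdev
    return z > z_threshold

print(is_abnormal(220_000, history))  # a €220,000 request stands far outside the pattern
print(is_abnormal(10_800, history))   # a typical amount passes quietly
```

A check like this would not prove the caller's voice was fake, but it would force an out-of-band verification step before an unusually large, "urgent" transfer goes through.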
Next month, I'll be attending Mobile World Congress 2020, and I invite you to join me on February 27 at 11:00am for the session I will be moderating, titled "Dangerous Deepfakes & Public Distrust: Debating & Combatting Weaponization of AI". Don't miss this chance to gain insight into the alarming potential of AI deepfakes to distort the truth, and to understand the countermeasures that social media companies can deploy.
If you want to know more, please contact us.