In a recent case of an online scam on WhatsApp, scammers used artificial intelligence (AI)-based deepfake technology to convince a man from Kerala to transfer Rs 40,000. The caller impersonated his former colleague and requested money for a medical emergency. When another demand was made, the victim realised he was on the receiving end of a sophisticated fraud, but by then the damage had been done.
A similar fraud in northern China that used sophisticated deepfake technology to convince a man to transfer money to a supposed friend has sparked concern about the potential of AI techniques to aid financial crimes. In recent years, deepfake technology has emerged as a powerful tool for creating realistic videos or audio recordings that can be difficult to distinguish from real footage. While initially used for entertainment purposes, the potential impact of deepfakes on cybersecurity is a growing concern.
Get ready for new threats
One of the main ways cybercriminals misuse AI is in generating deepfake videos and images to phish users and bypass security measures, stresses Vishak Raman, vice-president of sales, India, SAARC, SEAHK & ANZ at Fortinet. This is particularly prevalent on social media, where deepfakes are used to create fake identities. “The recent incidents of deepfake scams show that photos, voices and videos can be utilised by scammers. There is evidence that AI and ML are being used by cybercriminals to circumvent protective security measures deployed by organisations too,” Raman cautions.
What are deepfakes? Put simply, deepfakes are AI-generated videos or audio recordings that are created by using machine learning algorithms to replace a person’s face or voice with someone else’s. Raman explains that cybercriminals can use deepfake technology to create scams, false claims, and hoaxes that undermine organisations. For example, an attacker could create a false video of a senior executive admitting to criminal activity, such as financial crimes, or making false claims about the organisation’s activity. Aside from costing time and money to disprove, this could have a major impact on the business’s brand, public reputation, and share price.
A double-edged sword?
“While AI and ML have helped increase the efficiency of cybersecurity systems deployed in organisations, they have also led to a menacing rise in AI-driven cybercrime across the threat landscape,” says Vijendra Katiyar, country manager for India & SAARC at Trend Micro.
One concerning trend is the abuse of AI technologies, particularly in virtual kidnapping schemes. Malicious actors exploit AI to craft heart-wrenching deepfake audio clips, convincing victims that their loved ones are in danger and demanding exorbitant ransoms in emotionally manipulative attacks. “AI-powered chatbots like ChatGPT are already being used to automate and expand attacks, and that will only continue to grow – making it a scalable menace,” he says.
According to Katiyar, traditional ransom techniques will shift towards voice and video, and move into new realms like the metaverse, complicating security efforts.
“This can be countered by creating a paradigm shift wherein defenders, just like attackers, also start leveraging AI and ML and create asset graphs,” he points out.