In early 2024, an employee at the Hong Kong office of Arup, a multinational engineering firm, received an email inviting him to an important meeting convened by senior executives at headquarters.
The employee was a little skeptical at first; after all, almost everyone in the modern workplace has seen a phishing email. But once he followed the link and joined the meeting, he let his guard down completely.
Several senior executives, including the company's CFO, were present at the meeting, all with their cameras and microphones on. They discussed a confidential acquisition. The employee was never given a chance to speak; the executives deliberated among themselves and then handed down instructions to a few "mid-level employees."
If you have ever worked at a large company, this kind of meeting will feel entirely natural: for some incidental reason, your superior pulls you into a high-level meeting, perhaps because the follow-up work discussed there will fall to you. The only problem is that, in Arup's case, the employee who joined the meeting through the email link was the only "real person" in it.
Immediately afterwards, the follow-up work came through instant messages and emails: a "leader" asked the employee to transfer 200 million Hong Kong dollars (approximately 180 million yuan) to five accounts.
Suspecting nothing, the employee made 15 transfers from the company account.
It was not until five days later, during a communication with headquarters, that he realized he had been deceived.
According to the Hong Kong police report, the fraudsters collected audio and video of the company's executives from YouTube and other public channels, used Deepfake technology to create "virtual versions" of executives at different levels, and finally, following a designed script, pre-recorded the "executive meeting" that lured the employee into the trap, completing the scam without a single slip.
This may be the first AI fraud case in history involving such a huge amount of money. But sadly, we're almost certain this won't be the last.
Because, just as with the development of the Internet industry itself, the explosion of AI technology has ushered Internet fraud toward a kind of "endgame."
In the Internet's untamed early era, when the dial-up modem (the "cat," as Chinese users nicknamed it) was still emitting its piercing screech, the seeds of Internet fraud had already quietly broken ground. The online world back then was like uncultivated wilderness: laws and regulations were incomplete, and users' security awareness had not yet awakened. This provided a breeding ground for all kinds of online fraud.
In 1994, an email from a "Nigerian Prince" opened the era of Internet fraud. The sender claimed to be a member of the Nigerian royal family or a senior government official whose huge assets had been frozen by a coup or some other upheaval, and who urgently needed a "reliable foreign friend" to help transfer the funds. In return, the "prince" promised his victims huge commissions.
This sounds like a ridiculous story today, but at the time it won the trust of countless people. After all, in that era of limited information, people's imagination of faraway places far outstripped their knowledge of them. The emails were also filled with honorifics such as "Your Highness" and "Your Excellency," along with the temptation of enormous wealth, leading many to drop their guard and fall into the "Nigerian Prince" trap.
The success of the "Nigerian Prince" scam spawned a horde of imitators. All of a sudden, assorted "princes," "princesses," and "chiefs" appeared one after another, staging one "palace drama" after another. Although these scam emails were all much alike, they could always find new prey. It is estimated that in the 1990s alone, the "Nigerian Prince" scam cost victims millions of dollars, and this was an era before online remittance even existed.
With the rise of e-commerce, online fraud methods became increasingly diverse. In 1995, Amazon was founded, marking the arrival of the e-commerce era. It also brought the proliferation of credit card fraud and phishing websites.
The scheme behind credit card fraud is very simple: scammers obtain the victim's credit card information through various means and then make fraudulent charges. These methods include:
- Fake websites or emails: scammers imitate the website or emails of a bank or e-commerce platform to trick victims into entering credit card information (a toy detection sketch follows this list).
- Malware: scammers distribute malware via email or other means to steal victims' credit card information.
- Social engineering: scammers impersonate bank or e-commerce customer service over the phone or other channels to trick victims into handing over credit card information.

Although online fraud in this untamed era was relatively crude, the harm it caused should not be underestimated. These early cases laid the groundwork for the later evolution of online fraud, and some of the deceptions from that era are still active in our online lives today.
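To make the "fake websites" vector concrete, here is a toy heuristic that flags lookalike domains. It is an illustrative sketch only: the allowlist is hypothetical, and real phishing filters combine many more signals (domain reputation, age, TLS, page content).

```python
# An illustrative heuristic for the "fake website" vector: flag URLs whose
# domain merely resembles a trusted one. This is a toy sketch, not a filter.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = {"amazon.com", "paypal.com", "mybank.com"}  # hypothetical allowlist

def looks_like_phishing(url: str, threshold: float = 0.75) -> bool:
    """Flag domains that are similar to, but not equal to, a trusted domain."""
    host = urlparse(url).hostname or ""
    domain = ".".join(host.split(".")[-2:])  # crude registrable-domain guess
    for good in TRUSTED:
        ratio = SequenceMatcher(None, domain, good).ratio()
        if domain != good and ratio >= threshold:
            return True  # lookalike such as "amaz0n.com" or "paypa1.com"
    return False

print(looks_like_phishing("https://www.amaz0n.com/login"))  # True
print(looks_like_phishing("https://www.amazon.com/login"))  # False
```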
By the 21st century, the Internet was no longer a novelty; it had been woven into daily life. Yet online fraud also evolved rapidly on this increasingly familiar ground, with endless new methods that were hard to guard against. Internet fraud was no longer in its 1.0 era of mere "sweet talk" and "scripts"; it began to use more technical means to deceive the vast numbers of netizens who had just come online and knew little about the Internet.
In 2003, a phishing campaign targeting AOL users shocked the United States. Millions of AOL users received an email that appeared to come from AOL itself, asking them to update their account information. The link in the email, however, led to a counterfeit AOL website, where users unknowingly entered their usernames, passwords, and even credit card information, all of which was harvested by the scammers.
As broadband spread, Trojans and ransomware also began to wreak havoc on the Internet. In 2007, the "Storm Worm" Trojan broke out worldwide. It spread through email; once it infected a computer, it stole the user's personal information and sent it back to the scammers. More frightening still, Storm Worm could turn infected machines into nodes of a "botnet" used to send spam, launch DDoS attacks, and more.
In 2017, ransomware called "WannaCry" swept the globe. It exploited a Windows vulnerability, encrypted victims' files, and demanded a ransom for decryption. In just a few days, WannaCry infected hundreds of thousands of computers worldwide, causing enormous economic losses.
The rise of social media at the start of the 21st century gave online fraud yet another new platform. Social networks such as Facebook and Twitter became a paradise for scammers.
Romance scams, known in China as "pig-butchering scams," had in fact already spread around the world in this era. Scammers typically pose as someone tall, rich, and handsome, or young and beautiful, and befriend their victims on social media. With sweet words and attentive concern, the scammer gradually wins the victim's trust, then invents one reason after another to trick the victim into transferring money.
In 2018, a young woman named Anna Sorokin caused a storm in New York's social circles. Claiming to be the heiress of a wealthy German family, with a vast fortune and an art collection, she frequented high-end hotels and restaurants, mingled with celebrities and the rich, and lived a life of luxury. All of it, however, was built on lies.
Anna Sorokin was in fact an immigrant from Russia. By forging bank statements and fabricating a false identity, she obtained bank loans, won her friends' trust, and even stayed in luxury hotels for free. When the media exposed her story, it shocked New York's entire social scene.
Anna Sorokin's case is no isolated incident on social networks. Online, similar "fake identity" attacks emerge one after another: fraudsters forge personal information, publish false posts, and even use tools like Photoshop to fabricate photos, packaging themselves as wealthy, glamorous, or otherwise attractive figures. Exploiting people's yearning for a better life and their readiness to trust what they see on social networks, they lure victims in step by step.
In 2019, an Israeli man named Simon Leviev posed on Tinder as the son of a diamond tycoon and defrauded many women of both affection and money. He built the image of a rich heir by renting private jets and luxury cars and flaunting wealth on social media. Having won his victims' trust, he borrowed money on various pretexts and then vanished without a trace.
The growth of the mobile Internet has likewise spawned new fraud methods such as SMS fraud and QR code fraud. Perpetrators send fake lottery-winning messages, impersonate bank customer service with phishing links, or post advertisements carrying malicious QR codes in public places. Victims who fall for these tricks suffer financial losses.
Around 2020, AI technologies for images, sound, text, and other media leveled up one after another, and Internet fraud finally reached a "new stage."
Just as in the other fields where it has shown its productive power, AI here is not merely a tool but a master hand that can link together all previous fraud technologies, ideas, and methods.
With the continuous advance of Deepfake technology, fake videos have gradually reached the point of passing for real. The victim not only sees a familiar face; even the subtle expressions and tone of voice match the real person. Imagine: when your relative, friend, colleague, or even your boss appears on video and asks you for help in their usual tone, can you still stay vigilant?
Multi-modal attacks push this deceptiveness to the extreme. Scammers can combine speech synthesis, image processing, natural language processing, and other AI technologies to build a comprehensive, "immersive" fraud experience. You might receive a phone call from an AI-imitated relative, an AI-generated email, and even an AI-synthesized video, all pointing to the same scam. Faced with that, even the most cautious person may struggle to tell truth from falsehood.
AI's automation and scale also make fraud cheaper and more efficient. A scammer only needs to write a script; AI can then automatically generate large volumes of fraudulent content and spread it through social media, text messages, email, and other channels. This "cast a wide net" style of attack exposes far more people to the risk of fraud.
More worrying still, AI fraud no longer targets only individual users; it has begun to aim at enterprises, governments, and other institutions. Imagine the consequences if a bank's systems were breached by an AI fraud ring, or a government website were flooded with AI-forged disinformation.
In its ultimate form, AI fraud resembles an "arms race" of the digital age: scammers use AI to keep upgrading their weapons, and we must keep upgrading our defenses. It is a war without gunpowder, but its consequences may be graver than those of a real one.
From the "scammer's" perspective, the Arup scam described above still leaves room to "upgrade the user experience." If you have watched short videos recently, you may have noticed that high-quality digital humans can already perform seamlessly in live streams.
This means that in the future a fraudster could put on your boss's face and body and interact with you directly in a video call, rather than merely letting you "observe" a meeting. It also means that asking the person on camera to "blink," "wave," or "turn their head" can no longer serve as a credible verification method.
A report released by Deloitte in May 2024 predicts that as generative AI keeps getting better and cheaper, the likelihood of its use in fraud will keep rising. The report projects that by 2027, in the United States alone, losses from generative-AI-enabled fraud will grow to $40 billion, a compound annual growth rate of 32%; even the conservative forecast puts the figure at $22 billion.
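As a quick sanity check on how those figures fit together, the sketch below derives the baseline implied by the quoted endpoint and growth rate (the baseline is an inference from the quoted numbers, not a figure stated in this article):

```python
# Back-of-the-envelope check of the Deloitte projection quoted above.
target = 40e9            # projected 2027 US losses from gen-AI fraud, USD
cagr = 0.32              # quoted compound annual growth rate
years = 2027 - 2023      # growth window implied by a 2023 baseline

implied_base = target / (1 + cagr) ** years
print(f"Implied 2023 baseline: ${implied_base / 1e9:.1f}B")  # ~ $13.2B
```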
To reap larger profits, AI fraud rings are more likely to target financial institutions than individual users. In the past, individuals were the likelier victims, while financial institutions, with their comprehensive security and risk controls, were effectively a "no-go zone" for fraud rings.
However, as generative AI advances rapidly, the conservative, slow-to-evolve technology strategies of traditional financial institutions may turn them into the more vulnerable targets.
Sure enough, on June 11, one month after the Deloitte report was released, the cryptocurrency exchange OKX suffered an organized attack by an AI fraud ring. The gang used Deepfake AI to bypass the exchange's facial-recognition checks on users and, within 25 minutes, moved $11 million worth of cryptocurrency out of several accounts.
Another report, released on May 30, 2024 by the European digital identity company Signicat, found in a sampling assessment that 42.5% of detected fraud attempts so far use artificial intelligence, and that 29% of these AI-assisted attacks bypassed the corresponding security controls. The report also highlights that global identity fraud attacks have risen 80% over the past three years, while Deepfake-driven attacks have surged by 2,137%.
Deepfake is not the only technology being used for fraud. When it comes to "scam scripting" and "scam copywriting," the large language models that have exploded since late 2022 have also shown their talents.
A paper titled "Devising and Detecting Phishing Emails Using Large Language Models," published by IEEE in March 2024, compared the success rates of phishing emails generated in three ways.
The three approaches: having GPT generate the email directly (37% click-through rate), writing the email manually by following a phishing guide (74%), and having GPT generate the email and then refining it with the guide (62%).
In other words, with reasonable fine-tuning and automated pipeline design, writing phishing emails with a success rate above 50% no longer requires fraudsters to do the work themselves. AI may still be less skilled at this than a practiced human, but AI works around the clock.
When AI liberates the "scammer" labor force just as it liberates labor elsewhere, we will face personalized scam emails and copy in a thousand different guises and unlimited supply, further eroding potential victims' ability to recognize fraud.
But there is good news: GPT-4's detection rate for fraudulent emails is as high as 98.4%, regardless of whether the emails were written by humans or by AI, a rate that far exceeds that of any group of humans.
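That detection result suggests turning the same models into a triage filter. Below is a minimal sketch of LLM-based phishing triage using the OpenAI Python SDK; the prompt, model name, and one-word output format are illustrative assumptions, not the paper's setup:

```python
# Minimal sketch: ask a chat model to triage an email as phishing or not.
# Prompt, model name, and output format are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_email(subject: str, body: str) -> str:
    """Ask the model whether an email looks like phishing."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works for this sketch
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Answer with exactly "
                        "one word: PHISHING or LEGITIMATE."},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

verdict = classify_email(
    "Urgent: confidential acquisition transfer",
    "Per this morning's executive meeting, wire HK$13M to the account below today...",
)
print(verdict)  # expected: PHISHING
```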
In the face of all-round, full-process, fully simulated online fraud, raising the digital literacy and anti-fraud awareness of the whole population may be the most effective method, but it is also one of the slowest.
The "Asia Fraud Report" released by the independent anti-fraud company Gogolook and the Global Development Alliance in November 2023 shows that 40% of fraud victims in China and Japan blame their misfortune on their own failure to Identify fraud promptly. Compared with Europe and the United States, ordinary people in Asia, which rely more on online payments and online shopping, are more likely to encounter fraud due to digital literacy gaps.
The report also argues that targeted anti-fraud education for vulnerable groups is a highly effective measure.
In March 2021, China launched the National Anti-Fraud Center app, the world's first anti-fraud system integrating active defense with anti-fraud education and training. According to official data, in its first nine months the center blocked more than 320 billion yuan in fraudulent transfers, intercepted 1.55 billion fraudulent calls, and kept more than 28 million people from being deceived.
Of course, fraud prevention cannot rely entirely on raising potential victims' awareness; it also requires joint effort across society, at both the source and the intermediate links.
At the source, on the one hand, there are requirements for AI service providers: when releasing or operating their products, providers of AI products or services should be alert to the possibility of abuse and add security measures to the product or service, such as:
- Digital watermarking: embed invisible digital watermarks in AI-generated content for identification and tracing (a minimal sketch follows this list).
- Content moderation: use a second, independent AI to review user-submitted generation requests and filter out content likely to be used for fraud.
- User education: inform users about the risks of AI fraud and how to guard against them, raising overall security awareness.

On the other hand, legal supervision over the abuse of AI technology should be strengthened: clarify legal liability for AI fraud and step up the crackdown on it.
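For the watermarking item, here is a deliberately simple sketch that hides an identifying tag in generated text using zero-width characters. It is a toy scheme, trivially stripped by a determined attacker; production systems use far more robust approaches such as statistical token-level watermarks:

```python
# Toy invisible text watermark using zero-width characters; illustrative only.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bits as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, ignoring all visible characters."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="replace")

marked = embed_watermark("Generated reply...", "gen-ai:model-x")
assert extract_watermark(marked) == "gen-ai:model-x"
```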
In the intermediate link, financial institutions and technology companies need to strengthen cooperation, share anti-fraud intelligence, and jointly develop more advanced anti-fraud technologies. For example:
- Multi-factor authentication: beyond face recognition, also verify identity with fingerprints, voiceprints, SMS verification codes, and other factors.
- Abnormal behavior detection: use machine learning to model user behavior patterns, spot abnormal operations in time, and raise alerts (a minimal sketch follows this list).
- Real-time risk assessment: evaluate risk continuously during a transaction and apply preventive measures matched to the risk level.

Beyond technical measures, financial institutions also need to strengthen employee training, improve staff's ability to recognize AI fraud, establish sound internal reporting mechanisms, and encourage employees to report suspicious activity promptly.
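To illustrate the abnormal-behavior item, here is a minimal rule-style sketch that flags transfers whose amounts deviate sharply from a user's history. Real systems use far richer features (device, location, payee, timing) and learned models; the amounts below are hypothetical:

```python
# Minimal z-score rule over a user's transfer history; illustrative only.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a transfer whose amount deviates strongly from the user's history."""
    if len(history) < 5:           # too little history: route to manual review
        return True
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# In the Arup case, 15 transfers totalling HK$200M would dwarf normal activity.
past = [8_000, 12_500, 9_300, 15_000, 11_200, 7_800]  # hypothetical HKD amounts
print(is_anomalous(past, 13_000_000))  # True -> hold the transfer, raise an alert
```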
Of course, preventing AI fraud is a long-term and complicated process; it will take the joint efforts of governments, enterprises, individuals, and all other parties to effectively curb its spread.